Test Report: Docker_Linux_crio 21650

b4d3c15abadbe5777e1a85cdeffe9926aef5d7ec:2025-09-29:41677

Failed tests (24/331)

TestAddons/parallel/Registry (74.32s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.957496ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002430943s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004153428s
addons_test.go:392: (dbg) Run:  kubectl --context addons-051783 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-051783 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Non-zero exit: kubectl --context addons-051783 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.086859103s)

-- stdout --
	pod "registry-test" deleted from default namespace

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:399: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-051783 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:403: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted from default namespace
*
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 ip
2025/09/29 08:38:04 [DEBUG] GET http://192.168.49.2:5000
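The failing step above is the in-cluster probe: both registry pods report Running, yet the busybox wget against registry.kube-system.svc.cluster.local exits after roughly a minute with "timed out waiting for the condition". A minimal manual re-check of that path, assuming the addons-051783 profile from this run is still up; the context name, pod labels, image, and URL are taken from the log above, while using those labels as get selectors and adding a short busybox -T timeout are illustrative assumptions, not part of the test:

# Are the pods the test waited on still Running?
kubectl --context addons-051783 -n kube-system get pods -l actual-registry=true
kubectl --context addons-051783 -n kube-system get pods -l registry-proxy=true

# Does the Service behind registry.kube-system.svc.cluster.local have endpoints?
kubectl --context addons-051783 -n kube-system get svc,endpoints registry

# Re-run the exact probe the test uses, with a short wget timeout so a hang
# surfaces in seconds instead of after the minute-long kubectl wait.
kubectl --context addons-051783 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S -T 10 http://registry.kube-system.svc.cluster.local"

If the probe still hangs while the service shows healthy endpoints, suspicion shifts to in-cluster DNS or the kube-proxy/CNI path rather than the registry itself, which is what the post-mortem below gathers evidence on.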
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Registry]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-051783
helpers_test.go:243: (dbg) docker inspect addons-051783:

-- stdout --
	[
	    {
	        "Id": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	        "Created": "2025-09-29T08:29:49.784096917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 388185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T08:29:49.817498779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hosts",
	        "LogPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24-json.log",
	        "Name": "/addons-051783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-051783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-051783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	                "LowerDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-051783",
	                "Source": "/var/lib/docker/volumes/addons-051783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-051783",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-051783",
	                "name.minikube.sigs.k8s.io": "addons-051783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "047419f5f1ab31c122f731e4981df640cdefbc71a38b2a98a0269c254b8b5147",
	            "SandboxKey": "/var/run/docker/netns/047419f5f1ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-051783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:6e:72:c6:39:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0a6b532c24ef61399a92b99bcc9c2c11ccb6f875b789fadd5474d59e3dfaa8b",
	                    "EndpointID": "1838c1e0213d9bfb41a2e140fea05dd9b5a4866fea7930ce517a2c020e4c5b9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-051783",
	                        "d5025459b831"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
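The full docker inspect dump above is kept as evidence, but only a few fields matter for this failure: the container's address on the addons-051783 network (the 192.168.49.2 that the DEBUG GET above hits on port 5000) and the 127.0.0.1 host port published for 5000/tcp. Two --format queries against the JSON shown above, in the same Go-template style the harness itself uses for 22/tcp later in the log; the exact templates are illustrative assumptions, not something the suite runs:

docker inspect addons-051783 \
  --format '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'
docker inspect addons-051783 \
  --format '{{(index .NetworkSettings.Networks "addons-051783").IPAddress}}'

Against the dump above these should print 33141 and 192.168.49.2, i.e. the registry port is published on the host side even though the in-cluster request never got an answer.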
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-051783 -n addons-051783
helpers_test.go:252: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 logs -n 25: (1.33326756s)
helpers_test.go:260: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-575596 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-575596   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-575596                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-575596   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-749576 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-749576                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-575596                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-575596   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-749576                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-084266 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-084266                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-867285 --alsologtostderr --binary-mirror http://127.0.0.1:34813 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-867285                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ start   │ -p addons-051783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ enable headlamp -p addons-051783 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                           │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ ip      │ addons-051783 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:29:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:29:26.048391  387539 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:29:26.048698  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048709  387539 out.go:374] Setting ErrFile to fd 2...
	I0929 08:29:26.048715  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048947  387539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:29:26.049570  387539 out.go:368] Setting JSON to false
	I0929 08:29:26.050522  387539 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7915,"bootTime":1759126651,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:29:26.050623  387539 start.go:140] virtualization: kvm guest
	I0929 08:29:26.052691  387539 out.go:179] * [addons-051783] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:29:26.053951  387539 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:29:26.053949  387539 notify.go:220] Checking for updates...
	I0929 08:29:26.056443  387539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:29:26.057666  387539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:29:26.058965  387539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:29:26.060266  387539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:29:26.061458  387539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:29:26.062925  387539 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:29:26.085693  387539 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:29:26.085842  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.138374  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.129030053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.138489  387539 docker.go:318] overlay module found
	I0929 08:29:26.140424  387539 out.go:179] * Using the docker driver based on user configuration
	I0929 08:29:26.141686  387539 start.go:304] selected driver: docker
	I0929 08:29:26.141705  387539 start.go:924] validating driver "docker" against <nil>
	I0929 08:29:26.141717  387539 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:29:26.142365  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.198070  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.188331621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.198307  387539 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 08:29:26.198590  387539 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:29:26.200386  387539 out.go:179] * Using Docker driver with root privileges
	I0929 08:29:26.201498  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:26.201578  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:26.201592  387539 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 08:29:26.201692  387539 start.go:348] cluster config:
	{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0929 08:29:26.202985  387539 out.go:179] * Starting "addons-051783" primary control-plane node in "addons-051783" cluster
	I0929 08:29:26.204068  387539 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 08:29:26.205294  387539 out.go:179] * Pulling base image v0.0.48 ...
	I0929 08:29:26.206376  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:26.206412  387539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 08:29:26.206422  387539 cache.go:58] Caching tarball of preloaded images
	I0929 08:29:26.206482  387539 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 08:29:26.206520  387539 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 08:29:26.206532  387539 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 08:29:26.206899  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:26.206927  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json: {Name:mk2a286bc12b96a7a99203a2062747f0cef91a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:26.223250  387539 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 08:29:26.223398  387539 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 08:29:26.223419  387539 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 08:29:26.223423  387539 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 08:29:26.223433  387539 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 08:29:26.223443  387539 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 08:29:38.381567  387539 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 08:29:38.381612  387539 cache.go:232] Successfully downloaded all kic artifacts
	I0929 08:29:38.381692  387539 start.go:360] acquireMachinesLock for addons-051783: {Name:mk2e012788fca6778bd19d14926129f41648dfda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 08:29:38.381939  387539 start.go:364] duration metric: took 219.203µs to acquireMachinesLock for "addons-051783"
	I0929 08:29:38.381976  387539 start.go:93] Provisioning new machine with config: &{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:29:38.382063  387539 start.go:125] createHost starting for "" (driver="docker")
	I0929 08:29:38.383873  387539 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 08:29:38.384110  387539 start.go:159] libmachine.API.Create for "addons-051783" (driver="docker")
	I0929 08:29:38.384143  387539 client.go:168] LocalClient.Create starting
	I0929 08:29:38.384255  387539 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem
	I0929 08:29:38.717409  387539 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem
	I0929 08:29:39.058441  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 08:29:39.075697  387539 cli_runner.go:211] docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 08:29:39.075776  387539 network_create.go:284] running [docker network inspect addons-051783] to gather additional debugging logs...
	I0929 08:29:39.075797  387539 cli_runner.go:164] Run: docker network inspect addons-051783
	W0929 08:29:39.093367  387539 cli_runner.go:211] docker network inspect addons-051783 returned with exit code 1
	I0929 08:29:39.093407  387539 network_create.go:287] error running [docker network inspect addons-051783]: docker network inspect addons-051783: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-051783 not found
	I0929 08:29:39.093422  387539 network_create.go:289] output of [docker network inspect addons-051783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-051783 not found
	
	** /stderr **
	I0929 08:29:39.093524  387539 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:39.112614  387539 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c10860}
	I0929 08:29:39.112659  387539 network_create.go:124] attempt to create docker network addons-051783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 08:29:39.112709  387539 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-051783 addons-051783
	I0929 08:29:39.172396  387539 network_create.go:108] docker network addons-051783 192.168.49.0/24 created
	I0929 08:29:39.172433  387539 kic.go:121] calculated static IP "192.168.49.2" for the "addons-051783" container
	I0929 08:29:39.172502  387539 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 08:29:39.190245  387539 cli_runner.go:164] Run: docker volume create addons-051783 --label name.minikube.sigs.k8s.io=addons-051783 --label created_by.minikube.sigs.k8s.io=true
	I0929 08:29:39.209341  387539 oci.go:103] Successfully created a docker volume addons-051783
	I0929 08:29:39.209430  387539 cli_runner.go:164] Run: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 08:29:45.546598  387539 cli_runner.go:217] Completed: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.337124509s)
	I0929 08:29:45.546633  387539 oci.go:107] Successfully prepared a docker volume addons-051783
	I0929 08:29:45.546654  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:45.546683  387539 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 08:29:45.546737  387539 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 08:29:49.714226  387539 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.167437965s)
	I0929 08:29:49.714268  387539 kic.go:203] duration metric: took 4.167582619s to extract preloaded images to volume ...
	W0929 08:29:49.714368  387539 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 08:29:49.714404  387539 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 08:29:49.714455  387539 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 08:29:49.767111  387539 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-051783 --name addons-051783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-051783 --network addons-051783 --ip 192.168.49.2 --volume addons-051783:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 08:29:50.031579  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Running}}
	I0929 08:29:50.049810  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.068448  387539 cli_runner.go:164] Run: docker exec addons-051783 stat /var/lib/dpkg/alternatives/iptables
	I0929 08:29:50.119527  387539 oci.go:144] the created container "addons-051783" has a running status.
	I0929 08:29:50.119561  387539 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa...
	I0929 08:29:50.320586  387539 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 08:29:50.349341  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.370499  387539 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 08:29:50.370528  387539 kic_runner.go:114] Args: [docker exec --privileged addons-051783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 08:29:50.419544  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.438350  387539 machine.go:93] provisionDockerMachine start ...
	I0929 08:29:50.438444  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.459048  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.459374  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.459393  387539 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 08:29:50.596058  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.596100  387539 ubuntu.go:182] provisioning hostname "addons-051783"
	I0929 08:29:50.596175  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.615278  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.615589  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.615612  387539 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-051783 && echo "addons-051783" | sudo tee /etc/hostname
	I0929 08:29:50.766108  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.766195  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.785560  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.785774  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.785791  387539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-051783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-051783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-051783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 08:29:50.924619  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 08:29:50.924652  387539 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 08:29:50.924674  387539 ubuntu.go:190] setting up certificates
	I0929 08:29:50.924687  387539 provision.go:84] configureAuth start
	I0929 08:29:50.924737  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:50.943329  387539 provision.go:143] copyHostCerts
	I0929 08:29:50.943421  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 08:29:50.943556  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 08:29:50.943643  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 08:29:50.943713  387539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.addons-051783 san=[127.0.0.1 192.168.49.2 addons-051783 localhost minikube]
	I0929 08:29:51.148195  387539 provision.go:177] copyRemoteCerts
	I0929 08:29:51.148260  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 08:29:51.148304  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.166345  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.264074  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 08:29:51.290856  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 08:29:51.316758  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 08:29:51.341889  387539 provision.go:87] duration metric: took 417.187234ms to configureAuth
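The server certificate generated during configureAuth is signed by the minikube CA and carries the SANs listed in the san=[...] field above. A quick sketch for inspecting it locally with openssl, using the paths from this run:

  # Print the SANs baked into the freshly generated server certificate
  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'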
	I0929 08:29:51.341922  387539 ubuntu.go:206] setting minikube options for container-runtime
	I0929 08:29:51.342090  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:29:51.342194  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.359952  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:51.360170  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:51.360189  387539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 08:29:51.599614  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 08:29:51.599641  387539 machine.go:96] duration metric: took 1.161262882s to provisionDockerMachine
	I0929 08:29:51.599653  387539 client.go:171] duration metric: took 13.215501429s to LocalClient.Create
	I0929 08:29:51.599668  387539 start.go:167] duration metric: took 13.215557799s to libmachine.API.Create "addons-051783"
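The CRIO_MINIKUBE_OPTIONS drop-in written a few lines above only matters if the crio restart in the same command succeeded. A hedged sketch for double-checking both from an SSH session on the addons-051783 node:

  # Confirm the insecure-registry option landed and CRI-O came back up
  cat /etc/sysconfig/crio.minikube
  systemctl is-active crio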
	I0929 08:29:51.599677  387539 start.go:293] postStartSetup for "addons-051783" (driver="docker")
	I0929 08:29:51.599688  387539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 08:29:51.599774  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 08:29:51.599856  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.618351  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.717587  387539 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 08:29:51.721317  387539 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 08:29:51.721352  387539 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 08:29:51.721363  387539 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 08:29:51.721372  387539 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 08:29:51.721390  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 08:29:51.721462  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 08:29:51.721495  387539 start.go:296] duration metric: took 121.8109ms for postStartSetup
	I0929 08:29:51.721801  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.739650  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:51.740046  387539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 08:29:51.740104  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.758050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.851192  387539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 08:29:51.855723  387539 start.go:128] duration metric: took 13.4736408s to createHost
	I0929 08:29:51.855753  387539 start.go:83] releasing machines lock for "addons-051783", held for 13.47379323s
	I0929 08:29:51.855844  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.873999  387539 ssh_runner.go:195] Run: cat /version.json
	I0929 08:29:51.874046  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.874101  387539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 08:29:51.874186  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.892677  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.892826  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.984022  387539 ssh_runner.go:195] Run: systemctl --version
	I0929 08:29:52.057018  387539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 08:29:52.197504  387539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 08:29:52.202664  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.226004  387539 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 08:29:52.226089  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.256267  387539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 08:29:52.256294  387539 start.go:495] detecting cgroup driver to use...
	I0929 08:29:52.256336  387539 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 08:29:52.256387  387539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 08:29:52.272062  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 08:29:52.284075  387539 docker.go:218] disabling cri-docker service (if available) ...
	I0929 08:29:52.284139  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 08:29:52.297608  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 08:29:52.311496  387539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 08:29:52.379434  387539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 08:29:52.452878  387539 docker.go:234] disabling docker service ...
	I0929 08:29:52.452951  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 08:29:52.471190  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 08:29:52.482728  387539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 08:29:52.553081  387539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 08:29:52.660824  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 08:29:52.672658  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 08:29:52.689950  387539 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21650-382648/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I0929 08:29:53.606681  387539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 08:29:53.606744  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.620746  387539 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 08:29:53.620827  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.632032  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.642692  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.653396  387539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 08:29:53.663250  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.673800  387539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.690677  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.701296  387539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 08:29:53.710748  387539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 08:29:53.720068  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:53.822567  387539 ssh_runner.go:195] Run: sudo systemctl restart crio
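All of the sed edits above target /etc/crio/crio.conf.d/02-crio.conf before the restart. A sketch for verifying the drop-in after such a run (key names taken from the commands above; surrounding content will vary):

  # Inspect the settings the sed commands are expected to have left behind
  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf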
	I0929 08:29:54.052148  387539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 08:29:54.052242  387539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 08:29:54.056279  387539 start.go:563] Will wait 60s for crictl version
	I0929 08:29:54.056335  387539 ssh_runner.go:195] Run: which crictl
	I0929 08:29:54.059686  387539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 08:29:54.093633  387539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 08:29:54.093726  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.130572  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.167704  387539 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	I0929 08:29:54.169060  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:54.186559  387539 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 08:29:54.190730  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 08:29:54.202692  387539 kubeadm.go:875] updating cluster {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 08:29:54.202909  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.337502  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.468366  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.649435  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:54.649610  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.777589  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.915339  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:55.048055  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.117941  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.117965  387539 crio.go:433] Images already preloaded, skipping extraction
	I0929 08:29:55.118025  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.154367  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.154391  387539 cache_images.go:85] Images are preloaded, skipping loading
	I0929 08:29:55.154401  387539 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I0929 08:29:55.154505  387539 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-051783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 08:29:55.154591  387539 ssh_runner.go:195] Run: crio config
	I0929 08:29:55.197157  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:55.197179  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:55.197193  387539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 08:29:55.197222  387539 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-051783 NodeName:addons-051783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 08:29:55.197413  387539 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-051783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 08:29:55.197493  387539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 08:29:55.207525  387539 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 08:29:55.207613  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 08:29:55.217221  387539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 08:29:55.235810  387539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 08:29:55.258594  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
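The rendered kubeadm configuration shown above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before init runs. Newer kubeadm releases can lint such a file up front; a hedged sketch, noting that the validate subcommand is not present in older kubeadm versions:

  # Ask kubeadm itself to check the rendered configuration (kubeadm v1.26+)
  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml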
	I0929 08:29:55.277991  387539 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 08:29:55.281790  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 08:29:55.293204  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:55.360353  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:29:55.382375  387539 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783 for IP: 192.168.49.2
	I0929 08:29:55.382400  387539 certs.go:194] generating shared ca certs ...
	I0929 08:29:55.382416  387539 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.382548  387539 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 08:29:55.651560  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt ...
	I0929 08:29:55.651593  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt: {Name:mk53fbf30de594b3575593db0eac7c74aa2a569b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651775  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key ...
	I0929 08:29:55.651787  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key: {Name:mk35c377f1d90bf347db7dc4624ea5b41f2dcae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651874  387539 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 08:29:56.010531  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt ...
	I0929 08:29:56.010572  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt: {Name:mkabe28787fe5521225369fcdd8a8684c242d367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010810  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key ...
	I0929 08:29:56.010828  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key: {Name:mk151240dae8e83bb981e456caae01db62eb2077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010954  387539 certs.go:256] generating profile certs ...
	I0929 08:29:56.011050  387539 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key
	I0929 08:29:56.011071  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt with IP's: []
	I0929 08:29:56.156766  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt ...
	I0929 08:29:56.156798  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: {Name:mk9b8f8dd7c08d896eb2f2a24df27c4df7b8a87a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157020  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key ...
	I0929 08:29:56.157045  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key: {Name:mk413d2883ee03859619bae9a6ad426c2dac294b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157158  387539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d
	I0929 08:29:56.157188  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 08:29:56.672467  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d ...
	I0929 08:29:56.672506  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d: {Name:mka498a3f60495ba4009bb038cca767d64e6d878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672723  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d ...
	I0929 08:29:56.672747  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d: {Name:mkd42036f907b80afa6962c66b97c00a14ed475b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672879  387539 certs.go:381] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt
	I0929 08:29:56.672993  387539 certs.go:385] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key
	I0929 08:29:56.673074  387539 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key
	I0929 08:29:56.673103  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt with IP's: []
	I0929 08:29:57.054367  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt ...
	I0929 08:29:57.054403  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt: {Name:mk108739363f385844a88df9ec106753ae771d0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054593  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key ...
	I0929 08:29:57.054605  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key: {Name:mk26b223288f2fd31a6e78b544277cdc3d5192ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054865  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 08:29:57.054909  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 08:29:57.054936  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 08:29:57.054959  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 08:29:57.055530  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 08:29:57.081419  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 08:29:57.107158  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 08:29:57.132325  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 08:29:57.157699  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 08:29:57.182851  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 08:29:57.207862  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 08:29:57.233471  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 08:29:57.258657  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 08:29:57.286501  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
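After the copies above, the node holds both CA pairs and the signed leaf certificates under /var/lib/minikube/certs. A minimal chain check with openssl, assuming those paths:

  # Verify the apiserver and proxy-client certs chain back to the copied CAs
  sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt
  sudo openssl verify -CAfile /var/lib/minikube/certs/proxy-client-ca.crt /var/lib/minikube/certs/proxy-client.crt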
	I0929 08:29:57.305136  387539 ssh_runner.go:195] Run: openssl version
	I0929 08:29:57.310898  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 08:29:57.323725  387539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327458  387539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327527  387539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.334303  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 08:29:57.344385  387539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 08:29:57.347990  387539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 08:29:57.348046  387539 kubeadm.go:392] StartCluster: {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:29:57.348116  387539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 08:29:57.348159  387539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 08:29:57.385638  387539 cri.go:89] found id: ""
	I0929 08:29:57.385716  387539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 08:29:57.395454  387539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 08:29:57.405038  387539 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 08:29:57.405100  387539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 08:29:57.414685  387539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 08:29:57.414705  387539 kubeadm.go:157] found existing configuration files:
	
	I0929 08:29:57.414765  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 08:29:57.424091  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 08:29:57.424158  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 08:29:57.433341  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 08:29:57.442616  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 08:29:57.442679  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 08:29:57.451665  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.460943  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 08:29:57.461008  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.470122  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 08:29:57.479257  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 08:29:57.479340  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 08:29:57.488496  387539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 08:29:57.543664  387539 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 08:29:57.607707  387539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 08:30:06.732943  387539 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I0929 08:30:06.732999  387539 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 08:30:06.733103  387539 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 08:30:06.733192  387539 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 08:30:06.733241  387539 kubeadm.go:310] OS: Linux
	I0929 08:30:06.733332  387539 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 08:30:06.733405  387539 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 08:30:06.733457  387539 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 08:30:06.733497  387539 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 08:30:06.733545  387539 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 08:30:06.733624  387539 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 08:30:06.733688  387539 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 08:30:06.733751  387539 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 08:30:06.733912  387539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 08:30:06.734049  387539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 08:30:06.734125  387539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 08:30:06.734176  387539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 08:30:06.736008  387539 out.go:252]   - Generating certificates and keys ...
	I0929 08:30:06.736074  387539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 08:30:06.736130  387539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 08:30:06.736184  387539 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 08:30:06.736237  387539 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 08:30:06.736289  387539 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 08:30:06.736356  387539 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 08:30:06.736446  387539 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 08:30:06.736584  387539 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736671  387539 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 08:30:06.736803  387539 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736949  387539 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 08:30:06.737047  387539 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 08:30:06.737115  387539 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 08:30:06.737192  387539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 08:30:06.737274  387539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 08:30:06.737358  387539 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 08:30:06.737431  387539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 08:30:06.737517  387539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 08:30:06.737617  387539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 08:30:06.737730  387539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 08:30:06.737805  387539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 08:30:06.739945  387539 out.go:252]   - Booting up control plane ...
	I0929 08:30:06.740037  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 08:30:06.740106  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 08:30:06.740177  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 08:30:06.740270  387539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 08:30:06.740362  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 08:30:06.740460  387539 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 08:30:06.740572  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 08:30:06.740634  387539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 08:30:06.740771  387539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 08:30:06.740901  387539 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 08:30:06.740969  387539 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.961891ms
	I0929 08:30:06.741050  387539 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 08:30:06.741148  387539 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 08:30:06.741256  387539 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 08:30:06.741361  387539 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 08:30:06.741468  387539 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.198584202s
	I0929 08:30:06.741557  387539 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.20667671s
	I0929 08:30:06.741647  387539 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002286434s
	I0929 08:30:06.741774  387539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 08:30:06.741941  387539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 08:30:06.741998  387539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 08:30:06.742173  387539 kubeadm.go:310] [mark-control-plane] Marking the node addons-051783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 08:30:06.742236  387539 kubeadm.go:310] [bootstrap-token] Using token: sez7z1.jh96okhowb57z8tt
	I0929 08:30:06.743877  387539 out.go:252]   - Configuring RBAC rules ...
	I0929 08:30:06.743987  387539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 08:30:06.744079  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 08:30:06.744207  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 08:30:06.744316  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 08:30:06.744423  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 08:30:06.744505  387539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 08:30:06.744607  387539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 08:30:06.744646  387539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 08:30:06.744689  387539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 08:30:06.744695  387539 kubeadm.go:310] 
	I0929 08:30:06.744746  387539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 08:30:06.744752  387539 kubeadm.go:310] 
	I0929 08:30:06.744820  387539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 08:30:06.744826  387539 kubeadm.go:310] 
	I0929 08:30:06.744869  387539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 08:30:06.744924  387539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 08:30:06.744972  387539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 08:30:06.744978  387539 kubeadm.go:310] 
	I0929 08:30:06.745052  387539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 08:30:06.745066  387539 kubeadm.go:310] 
	I0929 08:30:06.745135  387539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 08:30:06.745149  387539 kubeadm.go:310] 
	I0929 08:30:06.745232  387539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 08:30:06.745306  387539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 08:30:06.745369  387539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 08:30:06.745377  387539 kubeadm.go:310] 
	I0929 08:30:06.745445  387539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 08:30:06.745514  387539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 08:30:06.745520  387539 kubeadm.go:310] 
	I0929 08:30:06.745584  387539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745665  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 \
	I0929 08:30:06.745690  387539 kubeadm.go:310] 	--control-plane 
	I0929 08:30:06.745699  387539 kubeadm.go:310] 
	I0929 08:30:06.745764  387539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 08:30:06.745774  387539 kubeadm.go:310] 
	I0929 08:30:06.745853  387539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745968  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 
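The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate with the openssl pipeline documented for kubeadm join, here pointed at this cluster's certificatesDir (/var/lib/minikube/certs) and assuming an RSA CA key:

  # Recompute the discovery token CA cert hash printed by kubeadm init
  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'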
	I0929 08:30:06.745984  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:30:06.745992  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:30:06.748010  387539 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 08:30:06.749332  387539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 08:30:06.753814  387539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I0929 08:30:06.753848  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 08:30:06.772879  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 08:30:06.985959  387539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 08:30:06.986041  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:06.986104  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-051783 minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78 minikube.k8s.io/name=addons-051783 minikube.k8s.io/primary=true
	I0929 08:30:06.996442  387539 ops.go:34] apiserver oom_adj: -16
	I0929 08:30:07.062951  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:07.563693  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.063933  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.563857  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.063020  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.563145  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.063764  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.564058  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.063584  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.131479  387539 kubeadm.go:1105] duration metric: took 4.145485124s to wait for elevateKubeSystemPrivileges
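The block above is the RBAC bootstrap: the same `kubectl get sa default` probe repeats roughly every 500ms until the default service account exists, which is what the 4.145s elevateKubeSystemPrivileges metric measures. A stdlib-only sketch of such a poll loop, with the binary and kubeconfig paths taken from the log (the function itself is illustrative, not minikube's wait code):

	package main

	import (
		"context"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForDefaultSA re-runs `kubectl get sa default` until it exits zero,
	// mirroring the repeated probes in the log, under an overall timeout.
	func waitForDefaultSA(ctx context.Context, kubectlBin, kubeconfig string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			cmd := exec.CommandContext(ctx, "sudo", kubectlBin,
				"get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // default service account exists; RBAC setup can proceed
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("default service account never appeared: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		err := waitForDefaultSA(ctx,
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}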
	I0929 08:30:11.131516  387539 kubeadm.go:394] duration metric: took 13.783475405s to StartCluster
	I0929 08:30:11.131536  387539 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.131680  387539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:30:11.132107  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.132380  387539 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:30:11.132425  387539 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 08:30:11.132561  387539 addons.go:69] Setting yakd=true in profile "addons-051783"
	I0929 08:30:11.132586  387539 addons.go:238] Setting addon yakd=true in "addons-051783"
	I0929 08:30:11.132592  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.132625  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132389  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 08:30:11.132650  387539 addons.go:69] Setting default-storageclass=true in profile "addons-051783"
	I0929 08:30:11.132650  387539 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-051783"
	I0929 08:30:11.132651  387539 addons.go:69] Setting registry-creds=true in profile "addons-051783"
	I0929 08:30:11.132672  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-051783"
	I0929 08:30:11.132675  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-051783"
	I0929 08:30:11.132684  387539 addons.go:238] Setting addon registry-creds=true in "addons-051783"
	I0929 08:30:11.132675  387539 addons.go:69] Setting storage-provisioner=true in profile "addons-051783"
	I0929 08:30:11.132723  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132729  387539 addons.go:69] Setting gcp-auth=true in profile "addons-051783"
	I0929 08:30:11.132737  387539 addons.go:69] Setting ingress=true in profile "addons-051783"
	I0929 08:30:11.132749  387539 addons.go:238] Setting addon ingress=true in "addons-051783"
	I0929 08:30:11.132751  387539 mustload.go:65] Loading cluster: addons-051783
	I0929 08:30:11.132786  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132903  387539 addons.go:69] Setting ingress-dns=true in profile "addons-051783"
	I0929 08:30:11.132921  387539 addons.go:238] Setting addon ingress-dns=true in "addons-051783"
	I0929 08:30:11.132932  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.133022  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133038  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133039  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133154  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133198  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133236  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133242  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133465  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.134910  387539 addons.go:69] Setting metrics-server=true in profile "addons-051783"
	I0929 08:30:11.134935  387539 addons.go:238] Setting addon metrics-server=true in "addons-051783"
	I0929 08:30:11.134966  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.135401  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133500  387539 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.136449  387539 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-051783"
	I0929 08:30:11.136484  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.136993  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.137446  387539 addons.go:69] Setting registry=true in profile "addons-051783"
	I0929 08:30:11.137472  387539 addons.go:238] Setting addon registry=true in "addons-051783"
	I0929 08:30:11.137504  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.137785  387539 out.go:179] * Verifying Kubernetes components...
	I0929 08:30:11.132620  387539 addons.go:69] Setting inspektor-gadget=true in profile "addons-051783"
	I0929 08:30:11.137998  387539 addons.go:238] Setting addon inspektor-gadget=true in "addons-051783"
	I0929 08:30:11.138030  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.138040  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.138478  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.132724  387539 addons.go:238] Setting addon storage-provisioner=true in "addons-051783"
	I0929 08:30:11.138872  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133573  387539 addons.go:69] Setting volcano=true in profile "addons-051783"
	I0929 08:30:11.133608  387539 addons.go:69] Setting volumesnapshots=true in profile "addons-051783"
	I0929 08:30:11.133632  387539 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.133523  387539 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-051783"
	I0929 08:30:11.133512  387539 addons.go:69] Setting cloud-spanner=true in profile "addons-051783"
	I0929 08:30:11.139071  387539 addons.go:238] Setting addon cloud-spanner=true in "addons-051783"
	I0929 08:30:11.139164  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139273  387539 addons.go:238] Setting addon volumesnapshots=true in "addons-051783"
	I0929 08:30:11.139284  387539 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-051783"
	I0929 08:30:11.139311  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139319  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140056  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:30:11.140193  387539 addons.go:238] Setting addon volcano=true in "addons-051783"
	I0929 08:30:11.140204  387539 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:11.140225  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140228  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.146698  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147224  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147394  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149077  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149662  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.151164  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.176264  387539 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 08:30:11.181229  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 08:30:11.181264  387539 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 08:30:11.181355  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.198928  387539 addons.go:238] Setting addon default-storageclass=true in "addons-051783"
	I0929 08:30:11.198980  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.200501  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.202621  387539 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 08:30:11.202751  387539 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 08:30:11.204060  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 08:30:11.204203  387539 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 08:30:11.204287  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.204590  387539 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 08:30:11.206350  387539 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 08:30:11.206413  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 08:30:11.206494  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	W0929 08:30:11.215084  387539 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 08:30:11.220539  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.228994  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 08:30:11.229058  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.230311  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 08:30:11.230348  387539 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 08:30:11.230415  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.230456  387539 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 08:30:11.232483  387539 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-051783"
	I0929 08:30:11.232653  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.234514  387539 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.234537  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 08:30:11.234593  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.236276  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.238980  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 08:30:11.240948  387539 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 08:30:11.242224  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.242345  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.242360  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 08:30:11.242423  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.249763  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.249815  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 08:30:11.249988  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.251632  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 08:30:11.252713  387539 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 08:30:11.256731  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 08:30:11.256909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.256925  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 08:30:11.257007  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.259232  387539 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 08:30:11.259246  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 08:30:11.261351  387539 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.261383  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 08:30:11.261446  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.261602  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 08:30:11.261990  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.264208  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 08:30:11.265661  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 08:30:11.266953  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 08:30:11.268988  387539 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 08:30:11.269090  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 08:30:11.270103  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.270359  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 08:30:11.270376  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 08:30:11.270435  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.270601  387539 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 08:30:11.270610  387539 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 08:30:11.270648  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.275993  387539 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 08:30:11.282092  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.282115  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 08:30:11.282181  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.285473  387539 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 08:30:11.290090  387539 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.291158  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 08:30:11.295912  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 08:30:11.295961  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.299675  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.313891  387539 out.go:179]   - Using image docker.io/busybox:stable
	I0929 08:30:11.315473  387539 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 08:30:11.316814  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.316848  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 08:30:11.316910  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.317050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.323553  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.332930  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.335659  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.338799  387539 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:11.338893  387539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 08:30:11.338992  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.348819  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.349921  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:30:11.354726  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.358638  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.365096  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.375197  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.379217  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	W0929 08:30:11.383998  387539 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 08:30:11.384044  387539 retry.go:31] will retry after 372.305387ms: ssh: handshake failed: EOF
	I0929 08:30:11.384985  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.385740  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.455618  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 08:30:11.455652  387539 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 08:30:11.483956  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 08:30:11.483993  387539 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 08:30:11.501077  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 08:30:11.501104  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 08:30:11.512909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 08:30:11.512936  387539 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 08:30:11.513909  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 08:30:11.513933  387539 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 08:30:11.522184  387539 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.522210  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 08:30:11.532474  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.547827  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.549888  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 08:30:11.549921  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 08:30:11.551406  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 08:30:11.551429  387539 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 08:30:11.551604  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.551620  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 08:30:11.562054  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.567658  387539 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.567682  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 08:30:11.568342  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.575483  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.579024  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.580084  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.589345  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 08:30:11.589374  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 08:30:11.591142  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.596651  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.617511  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.639242  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 08:30:11.639268  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 08:30:11.640436  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.640457  387539 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 08:30:11.676132  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.683757  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 08:30:11.683933  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 08:30:11.694476  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.733321  387539 node_ready.go:35] waiting up to 6m0s for node "addons-051783" to be "Ready" ...
	I0929 08:30:11.737381  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 08:30:11.737409  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 08:30:11.739451  387539 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
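The "host record injected" line above is the result of the long sed pipeline logged at 08:30:11.295912: it rewrites the coredns ConfigMap so a hosts { 192.168.49.1 host.minikube.internal; fallthrough } stanza sits ahead of the forward plugin (plus a log directive before errors). A rough Go sketch of the hosts half of that edit, assuming a stock-looking Corefile (illustrative string surgery, not the actual sed):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} stanza before the "forward . /etc/resolv.conf"
	// line of a Corefile, roughly what the sed in the log does to the coredns ConfigMap.
	func injectHostRecord(corefile, hostIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out.WriteString(hosts)
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		// a pared-down Corefile stand-in for the real ConfigMap contents
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}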
	I0929 08:30:11.742034  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 08:30:11.742058  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 08:30:11.860616  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 08:30:11.860647  387539 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 08:30:11.867313  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 08:30:11.867348  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 08:30:11.967456  387539 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:11.967489  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 08:30:11.972315  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 08:30:11.972363  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 08:30:12.022878  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:12.038007  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 08:30:12.038036  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 08:30:12.049218  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:12.116439  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 08:30:12.116470  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 08:30:12.218447  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 08:30:12.218482  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 08:30:12.270160  387539 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-051783" context rescaled to 1 replicas
	I0929 08:30:12.276753  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 08:30:12.276954  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 08:30:12.325380  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.325408  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 08:30:12.363377  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.640545  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.07217093s)
	W0929 08:30:12.640603  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:12.640631  387539 retry.go:31] will retry after 237.04452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
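Every ig-crd.yaml failure in this run is the same validation error: the file carries neither apiVersion nor kind, and the transfer logged at 08:30:11.270610 copied only 14 bytes, far too small for either field, so retrying the apply cannot help. A hedged pre-flight check one could run before the apply, using gopkg.in/yaml.v3 (illustrative; not part of minikube or the test harness):

	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// checkManifest reports whether every document in a manifest file carries the
	// apiVersion and kind fields that the failing ig-crd.yaml apply complains about.
	func checkManifest(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for i := 0; ; i++ {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					return nil // all documents checked
				}
				return err
			}
			if doc.APIVersion == "" || doc.Kind == "" {
				return fmt.Errorf("%s: document %d is missing apiVersion and/or kind", path, i)
			}
		}
	}

	func main() {
		if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}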
	I0929 08:30:12.640719  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.065212731s)
	I0929 08:30:12.641043  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.061988054s)
	I0929 08:30:12.641104  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.060998244s)
	I0929 08:30:12.641174  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049961126s)
	I0929 08:30:12.837190  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.240492795s)
	I0929 08:30:12.837239  387539 addons.go:479] Verifying addon ingress=true in "addons-051783"
	I0929 08:30:12.837345  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.219781667s)
	I0929 08:30:12.837419  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161075095s)
	I0929 08:30:12.837447  387539 addons.go:479] Verifying addon registry=true in "addons-051783"
	I0929 08:30:12.837566  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.142937066s)
	I0929 08:30:12.837594  387539 addons.go:479] Verifying addon metrics-server=true in "addons-051783"
	I0929 08:30:12.839983  387539 out.go:179] * Verifying ingress addon...
	I0929 08:30:12.839983  387539 out.go:179] * Verifying registry addon...
	I0929 08:30:12.839983  387539 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-051783 service yakd-dashboard -n yakd-dashboard
	
	I0929 08:30:12.842161  387539 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 08:30:12.843164  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 08:30:12.846165  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:12.846189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:12.846718  387539 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 08:30:12.846741  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
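The kapi.go lines above poll pods by label selector (kubernetes.io/minikube-addons=registry in kube-system, app.kubernetes.io/name=ingress-nginx in ingress-nginx) until they leave Pending. A compact client-go sketch of that style of wait; the kubeconfig path and selector come from the log, while the function and timeout handling are illustrative rather than minikube's kapi helper:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunningPods polls until at least one pod matches the selector and
	// every match is Running, roughly what the "current state: Pending" lines track.
	func waitForRunningPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForRunningPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}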
	I0929 08:30:12.878020  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.347067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.347316  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.444185  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.394912895s)
	W0929 08:30:13.444269  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 08:30:13.444303  387539 retry.go:31] will retry after 148.150087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
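The snapshot failure above is an ordering problem rather than a bad manifest: the VolumeSnapshotClass is applied in the same invocation that creates its CRDs, so the mapping for snapshot.storage.k8s.io/v1 does not exist yet ("ensure CRDs are installed first"), and a later retry can go through once the CRDs exist. One way to sequence it explicitly, sketched with a kubectl wait between the two applies (file and binary paths from the log; the wrapper itself is hypothetical):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	const (
		kubectlBin = "/var/lib/minikube/binaries/v1.34.1/kubectl"
		kubeconfig = "/var/lib/minikube/kubeconfig"
	)

	// run shells out to the version-pinned kubectl with the node-local kubeconfig.
	func run(args ...string) error {
		cmd := exec.Command("sudo", append([]string{kubectlBin, "--kubeconfig=" + kubeconfig}, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %w\n%s", args, err, out)
		}
		return nil
	}

	func main() {
		// 1. create the snapshot CRD first
		// 2. block until the API server has established it
		// 3. only then apply the VolumeSnapshotClass that depends on it
		steps := [][]string{
			{"apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
			{"wait", "--for=condition=established", "--timeout=60s",
				"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
			{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
		}
	}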
	I0929 08:30:13.444442  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.080991087s)
	I0929 08:30:13.444483  387539 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:13.446118  387539 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 08:30:13.448654  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 08:30:13.452016  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:13.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:13.577429  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.577457  387539 retry.go:31] will retry after 254.552952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.593694  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0929 08:30:13.737433  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:13.832408  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.846313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.846455  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.952328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.346125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.346258  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.845799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.845811  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.951680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.346030  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.346221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.453724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.845371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.845746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.952128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.053703  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.459968372s)
	I0929 08:30:16.053810  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.22138062s)
	W0929 08:30:16.053859  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:16.053883  387539 retry.go:31] will retry after 481.367348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:16.235952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:16.346141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.346415  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.452678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.535851  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:16.846177  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.846299  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.951988  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:17.090051  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.090084  387539 retry.go:31] will retry after 480.173629ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.345653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.345864  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.453018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:17.571186  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:17.846646  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.846705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.952363  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:18.133672  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:18.133711  387539 retry.go:31] will retry after 1.605452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:18.236698  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:18.345996  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.346227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.831696  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 08:30:18.831773  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:18.846470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.846549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.851454  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:18.951695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.969096  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 08:30:18.989016  387539 addons.go:238] Setting addon gcp-auth=true in "addons-051783"
	I0929 08:30:18.989103  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:18.989486  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:19.008865  387539 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 08:30:19.008932  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:19.027173  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:19.120755  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:19.121923  387539 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 08:30:19.122900  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 08:30:19.122919  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 08:30:19.143102  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 08:30:19.143126  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 08:30:19.162866  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.162888  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 08:30:19.183136  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.346348  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.346554  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.453192  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:19.501972  387539 addons.go:479] Verifying addon gcp-auth=true in "addons-051783"
	I0929 08:30:19.503639  387539 out.go:179] * Verifying gcp-auth addon...
	I0929 08:30:19.505850  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 08:30:19.554509  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 08:30:19.554531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
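At this point the gcp-auth namespace, service, and webhook manifests have been applied and minikube is polling for the pod labelled kubernetes.io/minikube-addons=gcp-auth in the gcp-auth namespace. A quick way to watch the same rollout by hand (context name, namespace, and label selector taken from the log; the rest is a plain kubectl invocation offered as a sketch) would be:

	kubectl --context addons-051783 -n gcp-auth get pods \
	  -l kubernetes.io/minikube-addons=gcp-auth -w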
	I0929 08:30:19.740347  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:19.845786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.845969  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.951989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.008598  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.299545  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.299581  387539 retry.go:31] will retry after 1.544699875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.345964  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.452158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.736317  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:20.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.845861  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.951742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.009815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.346000  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.346032  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.451989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.508685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.845176  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:21.845841  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.846114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.952278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.009273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:22.345019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.346075  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 08:30:22.403582  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.403621  387539 retry.go:31] will retry after 3.049515308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.452614  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.512271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:22.736403  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:22.845553  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:22.951921  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.010165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.345659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.345820  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.451629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.509351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.846115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.846228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.952047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.008926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:24.346005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.452131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.509321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:24.737273  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:24.845357  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.845622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.951671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.010110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.346716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.346788  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.453468  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:25.510278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.845392  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.845982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.951775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:26.006239  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.006394  387539 retry.go:31] will retry after 2.506202781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.008893  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.346077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.346300  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.452870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:26.510002  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.845936  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.846437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.952599  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.010142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:27.237031  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:27.345974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.346037  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.451702  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.509719  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:27.845995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.846262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.952122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.008966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.345646  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.346068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.452500  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.509096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.513240  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:28.845526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.845724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.952636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.009980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.073172  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:29.073204  387539 retry.go:31] will retry after 5.087993961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:29.345624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.345890  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.451566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.509314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.736247  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:29.845167  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.845589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.952470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.009285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.345961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.346228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.451762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.509671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.845660  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.845938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.951757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.010434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:31.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.346159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.452024  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.508639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:31.736734  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:31.845802  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.846069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.008631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.345183  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.345554  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.452360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.509283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.846011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.846198  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.952029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.008505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.345468  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.346184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.845492  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.845973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.951615  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.009499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:34.161747  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 08:30:34.236880  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:34.346017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.346168  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.451966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:34.713989  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.714029  387539 retry.go:31] will retry after 10.074915141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.846205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.846262  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.952041  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.009299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.346101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.346147  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.452133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.508814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.845885  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.846022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.952026  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.008870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:36.345968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.346092  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.452038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.508708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:36.736573  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:36.845946  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.846138  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.951934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.010147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.345611  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.346391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.452092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.508537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.845236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.845710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.951391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.009185  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:38.345379  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.345497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.452268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.509054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:38.736952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:38.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.845942  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.951848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.009583  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.345482  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.345749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.452467  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.509234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.845877  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.845968  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.951690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:40.345848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.346009  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.451555  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:40.737059  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:40.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.845985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.951632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.009343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.345541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.346172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.452233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.509214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.846040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.846112  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.951896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.009603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.345289  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.451783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.509700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.845799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.845983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.951967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.008596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:43.236598  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:43.346000  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.346147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.452087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.509013  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:43.846134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.846259  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.952036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.008744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.345998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.346244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.452116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.508722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.789668  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:44.848890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.848956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.952825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:45.346063  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.346265  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:30:45.349824  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.349902  387539 retry.go:31] will retry after 10.254228561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.451609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.509499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:45.736311  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:45.845308  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:45.845508  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.952578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.345276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.345820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.451640  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.509515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.845665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.845801  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.951610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.009568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:47.346135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.347757  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.451685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.509687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:47.736659  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:47.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.846278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.010881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.345580  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.346116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.508539  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.845649  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.845738  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.951441  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.009204  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.345513  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.345678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.451528  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.845483  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.846049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.951870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.009622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:50.236705  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:50.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.346397  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.452090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.508959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:50.845410  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.846029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.008722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.345637  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.346169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.452115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.845715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.845962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.951758  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.009370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.345902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.451699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.509385  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.735450  387539 node_ready.go:49] node "addons-051783" is "Ready"
	I0929 08:30:52.735486  387539 node_ready.go:38] duration metric: took 41.00212415s for node "addons-051783" to be "Ready" ...
	I0929 08:30:52.735510  387539 api_server.go:52] waiting for apiserver process to appear ...
	I0929 08:30:52.735569  387539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 08:30:52.754269  387539 api_server.go:72] duration metric: took 41.621848619s to wait for apiserver process to appear ...
	I0929 08:30:52.754302  387539 api_server.go:88] waiting for apiserver healthz status ...
	I0929 08:30:52.754329  387539 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 08:30:52.758629  387539 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 08:30:52.759566  387539 api_server.go:141] control plane version: v1.34.1
	I0929 08:30:52.759591  387539 api_server.go:131] duration metric: took 5.283085ms to wait for apiserver health ...
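
The api_server.go entries above first wait for the kube-apiserver process, then poll its /healthz endpoint until it returns 200 with body "ok". A minimal Go sketch of such a probe follows, reusing the endpoint from the log; the InsecureSkipVerify transport and the poll interval are simplifications for illustration only (minikube itself authenticates with the cluster's certificates rather than skipping verification).

// healthz_probe.go - minimal sketch of an apiserver /healthz poll.
// Endpoint taken from the log above; TLS verification is skipped here
// purely for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// Mirrors the "returned 200: ok" lines in the log.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // poll interval, arbitrary for this sketch
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
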
	I0929 08:30:52.759601  387539 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 08:30:52.763531  387539 system_pods.go:59] 20 kube-system pods found
	I0929 08:30:52.763568  387539 system_pods.go:61] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.763584  387539 system_pods.go:61] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.763591  387539 system_pods.go:61] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.763598  387539 system_pods.go:61] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.763604  387539 system_pods.go:61] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.763610  387539 system_pods.go:61] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.763618  387539 system_pods.go:61] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.763625  387539 system_pods.go:61] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.763632  387539 system_pods.go:61] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.763646  387539 system_pods.go:61] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.763655  387539 system_pods.go:61] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.763661  387539 system_pods.go:61] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.763671  387539 system_pods.go:61] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.763677  387539 system_pods.go:61] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.763685  387539 system_pods.go:61] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.763695  387539 system_pods.go:61] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.763703  387539 system_pods.go:61] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.763711  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.763762  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.763769  387539 system_pods.go:61] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.763779  387539 system_pods.go:74] duration metric: took 4.172047ms to wait for pod list to return data ...
	I0929 08:30:52.763792  387539 default_sa.go:34] waiting for default service account to be created ...
	I0929 08:30:52.766094  387539 default_sa.go:45] found service account: "default"
	I0929 08:30:52.766121  387539 default_sa.go:55] duration metric: took 2.321933ms for default service account to be created ...
	I0929 08:30:52.766133  387539 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 08:30:52.770696  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:52.770757  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.770770  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.770776  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.770784  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.770789  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.770794  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.770802  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.770808  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.770815  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.770824  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.770843  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.770851  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.770863  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.770872  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.770881  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.770891  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.770899  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.770908  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.770928  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.770935  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.770959  387539 retry.go:31] will retry after 296.951592ms: missing components: kube-dns
	I0929 08:30:52.847272  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:52.847306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.847283  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.956403  387539 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:52.956428  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.058959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
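
The kapi.go:96 lines record per-addon polling: pods are listed by label selector (for example kubernetes.io/minikube-addons=registry or app.kubernetes.io/name=ingress-nginx) and re-checked until they leave Pending. Below is a minimal client-go sketch of that polling pattern, assuming a kubeconfig at a placeholder path; it illustrates the loop seen in the log, not minikube's actual kapi implementation.

// Sketch: poll pods matching a label selector until all report Running.
// kubeconfig path and intervals are placeholder assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false // still Pending (or otherwise not Running)
			}
		}
		if allRunning {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // re-check, as the log does
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		fmt.Println("registry pods not ready:", err)
	}
}
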
	I0929 08:30:53.074050  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.074084  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.074092  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.074102  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.074109  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.074114  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.074118  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.074124  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.074127  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.074131  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.074136  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.074139  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.074143  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.074148  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.074158  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.074162  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.074167  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.074171  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.074177  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074185  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074189  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.074204  387539 retry.go:31] will retry after 260.486294ms: missing components: kube-dns
	I0929 08:30:53.340885  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.340928  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.340939  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.340949  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.340957  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.340970  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.340976  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.340984  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.340989  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.340994  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.341002  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.341007  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.341013  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.341020  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.341029  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.341037  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.341045  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.341052  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.341071  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341079  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341086  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.341104  387539 retry.go:31] will retry after 402.781904ms: missing components: kube-dns
	I0929 08:30:53.345674  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.452965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.509656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.749539  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.749584  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.749596  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.749607  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.749615  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.749625  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.749637  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.749644  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.749652  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.749658  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.749673  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.749681  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.749688  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.749700  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.749713  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.749725  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.749741  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.749752  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.749760  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749772  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749780  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.749803  387539 retry.go:31] will retry after 372.296454ms: missing components: kube-dns
	I0929 08:30:53.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.846351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.953470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.009621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.127961  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:54.128007  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:54.128016  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Running
	I0929 08:30:54.128029  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:54.128037  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:54.128046  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:54.128055  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:54.128068  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:54.128073  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:54.128080  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:54.128094  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:54.128101  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:54.128111  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:54.128119  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:54.128131  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:54.128140  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:54.128150  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:54.128156  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:54.128167  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128182  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128190  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Running
	I0929 08:30:54.128201  387539 system_pods.go:126] duration metric: took 1.362060932s to wait for k8s-apps to be running ...
	I0929 08:30:54.128214  387539 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 08:30:54.128269  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 08:30:54.143506  387539 system_svc.go:56] duration metric: took 15.282529ms WaitForService to wait for kubelet
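
The system_svc.go step above shells out to systemctl to confirm the kubelet unit is active. A small sketch of the same check via os/exec, using the standard `systemctl is-active --quiet kubelet` form and treating a zero exit status as "active"; purely illustrative.

// Sketch: check whether the kubelet systemd unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func kubeletActive() bool {
	// `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	return err == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
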
	I0929 08:30:54.143541  387539 kubeadm.go:578] duration metric: took 43.011126136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:30:54.143567  387539 node_conditions.go:102] verifying NodePressure condition ...
	I0929 08:30:54.146666  387539 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 08:30:54.146694  387539 node_conditions.go:123] node cpu capacity is 8
	I0929 08:30:54.146710  387539 node_conditions.go:105] duration metric: took 3.13874ms to run NodePressure ...
	I0929 08:30:54.146723  387539 start.go:241] waiting for startup goroutines ...
	I0929 08:30:54.346096  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.346452  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.452512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.845681  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.846213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.952945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.009776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.346034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.346210  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.452987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.604936  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:55.845661  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.846303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.952647  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.009596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:56.227075  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:56.227117  387539 retry.go:31] will retry after 11.111742245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
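
The apply fails client-side: kubectl's validation reports that /etc/kubernetes/addons/ig-crd.yaml has no top-level apiVersion or kind, so the rest of the batch is applied ("unchanged"/"configured") while the CRD manifest is rejected and the step is scheduled for retry. A short Go sketch of that field check, assuming gopkg.in/yaml.v3 and reading only the first YAML document; it mirrors why "[apiVersion not set, kind not set]" is reported and is not kubectl's actual validator.

// Sketch: detect the "[apiVersion not set, kind not set]" condition by
// decoding a manifest and checking the two required top-level fields.
// gopkg.in/yaml.v3 is an assumed dependency; only the first YAML
// document in the file is inspected here.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml") // path from the log
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	var doc map[string]interface{}
	if err := yaml.Unmarshal(data, &doc); err != nil {
		fmt.Println("parse:", err)
		return
	}
	var missing []string
	if v, _ := doc["apiVersion"].(string); v == "" {
		missing = append(missing, "apiVersion not set")
	}
	if k, _ := doc["kind"].(string); k == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		fmt.Printf("error validating manifest: %v\n", missing) // analogous to kubectl's message
	} else {
		fmt.Println("top-level apiVersion and kind present")
	}
}
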
	I0929 08:30:56.346587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.346664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.452545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.509737  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:56.846282  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.846404  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.952291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.008904  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.346213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.452947  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.553095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.845310  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.845536  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.952617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.009229  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.345911  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.452036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.509465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.846116  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.846300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.954223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.009020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.345799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.451999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.846016  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.846048  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.951820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.009510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.346008  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.346043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.452095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.845635  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.846133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.952120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.008582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.346305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.346398  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.452779  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.509350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.845977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.846089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.951976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.009725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.346046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.346195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.452152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.508856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.845624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.845816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.951786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.009165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.345570  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.345806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.452275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.508934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.846184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.846321  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.009280  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.345995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.346111  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.452256  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.509372  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.845664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.846025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.952025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.009380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.346175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.346181  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.452623  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.845511  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.845789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.951736  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.345807  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.346120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.452299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.845431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.845747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.951811  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.009905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.339106  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:07.345597  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:07.346187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.452931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.509578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.846266  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:31:07.899059  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:31:07.899089  387539 retry.go:31] will retry after 40.559996542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:31:07.952238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.009242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.345806  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.345963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.452237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.508727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.846489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.846533  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.952772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.010175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.346214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.346399  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.452814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.509683  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.846071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.846175  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.009101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.345238  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.346055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.509087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.845466  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.845735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.951734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.009376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.346018  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.346093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.452602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.509357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.845819  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.846106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.952393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.009094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.345109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.345635  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.452900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.509747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.845711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.009115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.345408  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.452396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.509231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.845494  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.846119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.952602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.010164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.346040  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.452353  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.845489  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.845815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.009711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.346376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.452852  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.509706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.846977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.847062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.952541  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.009327  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.345888  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.346265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.452465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.845448  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.845961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.010066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.345301  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.345698  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.451859  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.552769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.845897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.846010  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.951895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.345789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.345935  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.451969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.509592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.845904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.846320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.952560  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.009221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.345672  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.509390  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.845688  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.845944  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.346107  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.451968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.845746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.846140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.952760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.345888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.345967  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.452022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.510304  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.845633  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.846006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.952314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.346112  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.346281  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.452380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.509171  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.845463  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.952321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.008794  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.345924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.346134  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.452014  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.510198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.845423  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.845908  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.952121  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.008788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.345818  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.345880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.452709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.846079  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.346408  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.346645  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.452594  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.509856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.846416  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.846446  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.952577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.009243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.346002  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.346328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.452568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.509226  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.845630  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.845989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.009102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.344984  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.345670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.451721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.509670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.846298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.846328  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.952436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.009088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.345071  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.345514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.452990  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.509800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.845538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.845549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.952752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.345731  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.345767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.451898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.845660  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.845743  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.954437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.009591  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.345694  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.345826  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.451850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.509114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.845457  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.845863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.008880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.452522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.845340  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.846098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.345562  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.345751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.451752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.509498  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.846005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.846015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.952296  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.008916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.346067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.346085  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.452074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.508388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.846407  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.952505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.009198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.345603  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.345997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.452284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.508994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.845333  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.845899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.346173  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.346187  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.452081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.508670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.848907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.848908  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.951592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.009305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.345881  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.346217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.452391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.509291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.846291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.952619  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.345641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.346183  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.452340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.845435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.845657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.951659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.009365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.345904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.345948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.508874  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.846399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.952667  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.009535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.346057  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.346313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.452593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.509172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.845821  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.951931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.009666  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.345746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.345756  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.451930  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.509717  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.845968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.846159  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.952302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.008813  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.345751  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.346083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.452220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.508800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.846373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.846428  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.952582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.009477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.345816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.346146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.509082  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.845206  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.845593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.952920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.009344  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.346032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.452584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.846130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.846227  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.952242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.009320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.345668  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.346165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.452320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.846497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.846568  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.009270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.346009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.346017  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.452179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.508810  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.846318  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.953200  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.053765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.345928  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.345949  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.451841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.509367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.845759  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.009049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.346089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.346296  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.509276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.845998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.846031  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.953092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.008958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.348118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.348220  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.452645  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.459706  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:48.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.845521  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.846369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.952245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:31:49.009817  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:31:49.009958  387539 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
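Editor's note on the failure above (illustrative, not part of the captured log): the validation error means at least one document inside /etc/kubernetes/addons/ig-crd.yaml reached kubectl without the two mandatory top-level fields, "apiVersion" and "kind". The "--validate=false" flag suggested in the message only disables kubectl's manifest validation; it does not supply the missing fields. As a hedged sketch only (the real contents of ig-crd.yaml are not captured in this log, and the resource names below are hypothetical), a CustomResourceDefinition document that would pass this check declares both fields up front:

    # Minimal sketch of a CRD manifest header; not the actual ig-crd.yaml.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.kinvolk.io        # hypothetical; must be <plural>.<group>
    spec:
      group: gadget.kinvolk.io              # hypothetical group, for illustration
      scope: Namespaced
      names:
        kind: Trace
        plural: traces
        singular: trace
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              x-kubernetes-preserve-unknown-fields: true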
	I0929 08:31:49.346161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.346314  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.452693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.509721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:49.846323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.846403  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.952288  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.009479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.346165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.346262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.511027  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.846141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.846346  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.952309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.345651  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.346358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.452496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.845910  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.846102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.952292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.008948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.346231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.346476  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.452572  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.846165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.846219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.009004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.452012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.510161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.845342  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.845616  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.952894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.009820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.346066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.346111  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.451951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.509668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.845920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.845975  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.952307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.008953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.346482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.346564  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.452557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.509198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.846008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.846122  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.952273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.009005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.345943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.345987  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.451970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.509693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.846279  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.846364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.952734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.345985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.452169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.509107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.845868  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.845918  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.952230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.008806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.346324  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.346362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.452386  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.509302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.845621  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.952271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.009231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.345552  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.346005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.452425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.509368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.846005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.846038  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.009825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.346371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.346435  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.845617  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.845923  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.952434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.009268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.346190  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.452432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.845820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.952038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.009864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.345911  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.346056  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.452757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.845906  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.846292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.952670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.345785  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.346020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.452457  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.509461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.846203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.952857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.008766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.346191  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.346205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.509374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.845874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.846090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.952199  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.009031  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.345858  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.345930  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.451888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.846482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.846625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.952585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.009218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.345706  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.452653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.509286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.845541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.845704  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.951956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.009468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.345695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.345745  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.451863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.510159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.845888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.845901  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.951951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.345980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.346046  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.452589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.509271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.846034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.952511  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.008945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.346573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.346620  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.452981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.509795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.846438  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.009110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.345733  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.451902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.509713  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.846101  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.846139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.952420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.009168  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.346099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.346223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.510142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.845960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.951897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.010286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.345508  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.346153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.452434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.845813  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.846236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.952299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.345858  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.346006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.452117  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.508849  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.845790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.846007  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.951901  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.346064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.346065  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.452106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.508883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.846158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.846171  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.952374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.008914  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.346557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.346608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.509895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.846827  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.846861  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.952699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.009411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.345859  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.346429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.452726  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.846572  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.846610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.952453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.345250  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.345814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.508754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.846167  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.846211  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.346188  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.509267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.846236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.846235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.952637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.009342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.345703  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.346091  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.509449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.846316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.846344  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.952405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.345264  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.346400  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.452542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.509262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.845773  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.346194  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.509376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.848839  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.849867  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.952936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.010023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.346114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.452763  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.846197  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.846244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.952388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.009290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.345800  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.346246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.452672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.509534  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.846304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.846334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.952785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.009642  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.346072  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.346415  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.452739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.509705  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.846107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.846335  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.952786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.009641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.346282  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.346356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.452912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.509769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.846639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.846675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.953086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.009130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.452469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.510429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.845959  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.846628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.953298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.009036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.347053  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:27.347275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.452777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.846103  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.846145  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.072906  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.073113  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.508950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.845955  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.846095  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.009351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.347464  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.347629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.453517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.553437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.846126  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.846245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.008971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.345959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.346015  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.509418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.845766  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.846285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.008956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.345931  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.346361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.452474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.845897  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.846021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.952093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.345435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.345772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.452246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.509083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.845812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.845956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.346099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.346120  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:33.452146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.508729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.846479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.036243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.036382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.345600  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.452267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.508982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.845610  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.845774  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.953630  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.008888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.346785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.346853  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.451866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.509729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.846406  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.954174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.055655  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.845459  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.845939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.951953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.009866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.346021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.346064  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.452076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.509650  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.952853  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.345624  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.452271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.509005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.845239  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.845607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.953072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.009685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.346312  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.346343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.452629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.846305  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.952898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.009523  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.346058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.346222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.452218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.509154  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.845436  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.845959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.952223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.008967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.345362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.345715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.451987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.509593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.846208  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.952460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.009083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.345364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.345994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.452312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.845412  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.846137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.009246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.345531  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.451965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.509607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.845677  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.845725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.953242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.008881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.346140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.346245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.508976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.846058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.846073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.008952  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.452230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.508958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.846253  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.846260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.952496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.009248  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.345700  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.346422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.452785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.846041  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.951796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.009505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.345956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.345992  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.451971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.509761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.846334  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.952805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.009735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.345689  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.346306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.452750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.845880  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.846359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.952570  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.009297  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.345969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.346094  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.452240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.509049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.846006  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.952184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.008907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.345976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.346081  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.451788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.510100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.845304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.848309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.345805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.345874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.451634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.509582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.845944  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.846447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.953076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.008934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.345804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.345877  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.452096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.508656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.846195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.846222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.952603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.009374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.345675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.846118  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.952566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.009207  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.345383  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.452193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.508803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.846518  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.846608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.952787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.009360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.346141  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.346211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.452319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.846350  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.846419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.952451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.009066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.345454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.345940  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.452221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.508812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.846088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.846113  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.952011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.345986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.346090  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.846361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.846431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.952871  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.009495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.346447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.346500  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.452249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.508841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.845781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.845828  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.951889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.009775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.346440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.346485  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.452552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.008817  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.346461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.346526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.455517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.845761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.845875  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.952068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.009767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.346151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.346291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.452530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.553772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.845974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.846019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.010114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.345293  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.345801  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.451761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.845976  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.846143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.952766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.009431  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.345682  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.346257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.453746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.509942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.846258  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.846309  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.952266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.009753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.346015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.346114  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.452202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.846315  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.846361  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.952432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.009137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.345758  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:05.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.552401  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.846099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.846460  387539 kapi.go:107] duration metric: took 2m53.003293209s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 08:33:05.954425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.011134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.346506  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.509064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.845958  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.952356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.009108  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.453032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.510592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.846109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.954081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.053417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.351454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.453361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.509493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.846396  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.953209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.013355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.346185  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.452954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.509941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.846594  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.953166  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.011098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.345673  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.452685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.846291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.952757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.010232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.345715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.452872  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.509757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.845940  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.009576  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.476146  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.476164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.508903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.846546  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.952547  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.009054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.345224  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:13.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.845854  387539 kapi.go:107] duration metric: took 3m1.003676867s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 08:33:13.953193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.953043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.009571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.509029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.952456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.008996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.452993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.509565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.951754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.010077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.452637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.509767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.951958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.009558  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.452610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.509383  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.953289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.452727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.952537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.054307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.453283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.508941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.509772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.452743  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.509695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.952306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.952897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.009648  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.508741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.952701  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.009545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.452359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.552870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.952571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.509263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.009531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.452141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.952219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.008825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.452943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.509596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.951821  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.009481  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.509195  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.953059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.053354  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.452999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.951979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.009797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.453388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.508724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.952067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.009597  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.452510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.509504  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.009757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.451725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.952055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.009994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.509072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.952958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.009293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.453339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.009056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.453293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.508838  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.953074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.013450  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.452649  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.509512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.952032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.009978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.509308  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.952931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.452323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.953222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.009006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.452790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.509538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.951932  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.009432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.455147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.508750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.952251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.009149  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.453440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.009671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.451894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.509637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.951679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.009272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.452122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.509896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.952875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.009456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.452086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.509855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.009503  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.509412  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.951948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.009749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.508624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.452489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.509007  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.952454  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.452929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.509326  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.953179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.009573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.452080  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.952316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.008983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.509589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.509141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.952449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.009163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.452673  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.952399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.008968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.452357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.509312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.008903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.452740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.008515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.952684  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.009269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.452860  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.509842  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.952800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.009471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.452132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.508760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.952191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.008875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.452781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.953587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.054438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.452155  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.508625  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.009015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.452064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.508595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.952010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.010061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.452878  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.509741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.008974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.452307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.508972  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.952590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.452989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.952475  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.453033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.509562  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.008939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.453017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.010460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.451978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.509900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.008912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.452986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.509922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.009396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.452015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.508696  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.053986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.453071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.508707  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.509938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.952708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.951905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.452055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.509716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.952153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.009034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.452857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.509634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.952411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.009151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.453043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.508787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.951746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.452755  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.509577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.951855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.009721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.452270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.509070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.952417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.009119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.452899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.509945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.008973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.452420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.952703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.009419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.452368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.509153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.009176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.451907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.509703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.952486  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.009310  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.453128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.509247  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.952807  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.452479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.951882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.009724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.952303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.052740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.452786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.509524  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.952084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.009393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.452606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.509227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.952919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.009449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.509272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.953056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.008665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.452311  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.952950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.452806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.951992  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.009677  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.454926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.556176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.952552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.009135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.452491  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.509187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.952765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.010044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.453284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.509124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.452601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.509427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.952099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.008641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.452715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.952690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.009533  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.452468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.509120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.952652  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.009453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.452283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.509034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.952982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.010277  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.452898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.509951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.952333  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.009152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.452796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.509514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.951891  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.452769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.509365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.952087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.009812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.508954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.953223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.009045  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.508795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.952125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.452644  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.509926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.009805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.452339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.509062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.952706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.009289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.453174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.553316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.952985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.009340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.453131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.951783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.009764  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.509221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.952799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.009661  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.451963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.951981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.009474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.451982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.510046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.952776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.009347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.451710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.952334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.452851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.509879  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.952761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.009609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.453093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.508618  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.952367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.009335  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.451828  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.509765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.952131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.008768  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.452125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.508617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.951915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.452347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.509044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.953033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.008575  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.509020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.009883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.952427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.008882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.451996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.509798  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.952349  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.008994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.452078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.953244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.008791  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.452820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.509438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.952276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.454329  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.508526  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.951927  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.009514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.509176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.953124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.008742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.452318  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.952978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.008626  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.451991  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.509530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.008765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.452089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.952535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.009257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.452850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.509391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.951665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.010070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.452234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.508751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.952557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.452356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.509081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.952954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.451578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.509069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.952979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.009394  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.451672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.953084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.008804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.452100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.508590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.952186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.008919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.451692  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.952159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.008936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.452290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.509522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.952657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.452687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.952004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.009665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.509219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.953317  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.053305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.452957  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.509406  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.951753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.010494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.451613  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.951916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.009368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.451621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.509537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.951986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.009697  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.509309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.953131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.008745  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.452118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.508915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.952506  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.009283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.453125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.508686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.009048  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.452863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.509403  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.952211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.009143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.452579  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.952593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.009236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.452668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.509287  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.953152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.008951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.451960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.951797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.009781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.452176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.508962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.952918  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.010145  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.452488  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.509471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.951970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.009582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.451912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.508700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.952497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.453230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.509119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.952889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.454455  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.509009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.953474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.009465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.452010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.509605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.951929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.508723  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.053411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.452665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.953146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.008802  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.451806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.509590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.952410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.053369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.509264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.952818  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.009233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.451994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.509760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.009364  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.452180  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.509156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.009587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.451744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.509487  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.008678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.451795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.509551  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.952298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.452628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.509567  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.952018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.008605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.509196  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.953269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.009042  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.452866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.509473  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.952009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.053084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.509189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.952595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.452191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.508730  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.952389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.452680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.509241  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.952532  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.009493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.452238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.509131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.952695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.009405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.452184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.509012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.952350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.009078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.452686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.509295  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.953015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.008664  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.452062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.508632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.952395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.008941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.451875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.509433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.952771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.009472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.509331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.953175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.009259  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.453005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.509759  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.952445  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.008890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.452239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.952339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.452889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.053004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.452816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.509585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.951856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.009542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.452139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.508997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.009668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.452051  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.952019  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.008662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.451816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.509495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.953217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.008712  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.452395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.008657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.451985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.509265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.953263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.008734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.509077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.952688  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.009433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.952693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.009377  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.452681  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.509209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.952342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.009052  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.452762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.509115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.953186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.010178  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.509505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.951715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.009812  387539 kapi.go:107] duration metric: took 5m46.503976887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 08:36:06.011826  387539 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-051783 cluster.
	I0929 08:36:06.013337  387539 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 08:36:06.014809  387539 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
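	As a minimal sketch of the opt-out described in the messages above: only the `gcp-auth-skip-secret` label key comes from the log itself; the pod name, image, and the label value "true" are illustrative assumptions, and the label must be present when the pod is created for the mutating webhook to skip it.
	
	  # hypothetical example: create a pod carrying the skip label at creation time
	  kubectl --context addons-051783 run gcp-auth-skip-demo \
	    --image=busybox \
	    --labels="gcp-auth-skip-secret=true" \
	    -- sleep 3600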
	I0929 08:36:06.452825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.952244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.452410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.952142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.452175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.451974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.953036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.452917  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.953235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.451608  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.952203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.952132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:13.449535  387539 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0929 08:36:13.449570  387539 kapi.go:107] duration metric: took 6m0.00092228s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0929 08:36:13.449699  387539 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0929 08:36:13.451535  387539 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth
	I0929 08:36:13.453038  387539 addons.go:514] duration metric: took 6m2.320628972s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth]
	I0929 08:36:13.453089  387539 start.go:246] waiting for cluster config update ...
	I0929 08:36:13.453117  387539 start.go:255] writing updated cluster config ...
	I0929 08:36:13.453476  387539 ssh_runner.go:195] Run: rm -f paused
	I0929 08:36:13.457677  387539 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:13.461120  387539 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.465176  387539 pod_ready.go:94] pod "coredns-66bc5c9577-n8bx8" is "Ready"
	I0929 08:36:13.465203  387539 pod_ready.go:86] duration metric: took 4.058605ms for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.467075  387539 pod_ready.go:83] waiting for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.470714  387539 pod_ready.go:94] pod "etcd-addons-051783" is "Ready"
	I0929 08:36:13.470733  387539 pod_ready.go:86] duration metric: took 3.636114ms for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.472521  387539 pod_ready.go:83] waiting for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.476217  387539 pod_ready.go:94] pod "kube-apiserver-addons-051783" is "Ready"
	I0929 08:36:13.476238  387539 pod_ready.go:86] duration metric: took 3.697266ms for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.478025  387539 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.862501  387539 pod_ready.go:94] pod "kube-controller-manager-addons-051783" is "Ready"
	I0929 08:36:13.862531  387539 pod_ready.go:86] duration metric: took 384.48807ms for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.061450  387539 pod_ready.go:83] waiting for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.461226  387539 pod_ready.go:94] pod "kube-proxy-wbl7p" is "Ready"
	I0929 08:36:14.461255  387539 pod_ready.go:86] duration metric: took 399.774957ms for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.661898  387539 pod_ready.go:83] waiting for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061371  387539 pod_ready.go:94] pod "kube-scheduler-addons-051783" is "Ready"
	I0929 08:36:15.061418  387539 pod_ready.go:86] duration metric: took 399.4933ms for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061435  387539 pod_ready.go:40] duration metric: took 1.603719933s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:15.109384  387539 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 08:36:15.111939  387539 out.go:179] * Done! kubectl is now configured to use "addons-051783" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 08:37:13 addons-051783 crio[938]: time="2025-09-29 08:37:13.740196625Z" level=info msg="Created container b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c: headlamp/headlamp-85f8f8dc54-9nxzs/headlamp" id=75344431-63f7-4c8a-871d-9072888abbb3 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 08:37:13 addons-051783 crio[938]: time="2025-09-29 08:37:13.740791476Z" level=info msg="Starting container: b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c" id=b444deb4-ba60-433f-a682-49db5591304b name=/runtime.v1.RuntimeService/StartContainer
	Sep 29 08:37:13 addons-051783 crio[938]: time="2025-09-29 08:37:13.747577459Z" level=info msg="Started container" PID=9803 containerID=b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c description=headlamp/headlamp-85f8f8dc54-9nxzs/headlamp id=b444deb4-ba60-433f-a682-49db5591304b name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8b2bdbee285104b3e0d7743fb9c59b2a5c7fdc4cea69f7c6d33260162e43efc
	Sep 29 08:37:20 addons-051783 crio[938]: time="2025-09-29 08:37:20.577008236Z" level=info msg="Stopping container: b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c (timeout: 30s)" id=4645548f-2b76-4616-8f6e-af916aec1ee1 name=/runtime.v1.RuntimeService/StopContainer
	Sep 29 08:37:20 addons-051783 crio[938]: time="2025-09-29 08:37:20.714786099Z" level=info msg="Stopped container b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c: headlamp/headlamp-85f8f8dc54-9nxzs/headlamp" id=4645548f-2b76-4616-8f6e-af916aec1ee1 name=/runtime.v1.RuntimeService/StopContainer
	Sep 29 08:37:20 addons-051783 crio[938]: time="2025-09-29 08:37:20.715449641Z" level=info msg="Stopping pod sandbox: b8b2bdbee285104b3e0d7743fb9c59b2a5c7fdc4cea69f7c6d33260162e43efc" id=499f8dfe-52a6-49db-a94c-0f8c48b6d39e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:37:20 addons-051783 crio[938]: time="2025-09-29 08:37:20.715762953Z" level=info msg="Got pod network &{Name:headlamp-85f8f8dc54-9nxzs Namespace:headlamp ID:b8b2bdbee285104b3e0d7743fb9c59b2a5c7fdc4cea69f7c6d33260162e43efc UID:a4575d99-9a3b-4cda-9731-fdad2c9fed6d NetNS:/var/run/netns/8d2f4762-f052-4af8-b8c7-b74f9acd55b5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 08:37:20 addons-051783 crio[938]: time="2025-09-29 08:37:20.715937798Z" level=info msg="Deleting pod headlamp_headlamp-85f8f8dc54-9nxzs from CNI network \"kindnet\" (type=ptp)"
	Sep 29 08:37:20 addons-051783 crio[938]: time="2025-09-29 08:37:20.739350488Z" level=info msg="Stopped pod sandbox: b8b2bdbee285104b3e0d7743fb9c59b2a5c7fdc4cea69f7c6d33260162e43efc" id=499f8dfe-52a6-49db-a94c-0f8c48b6d39e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:37:21 addons-051783 crio[938]: time="2025-09-29 08:37:21.166925898Z" level=info msg="Removing container: b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c" id=94ba03d6-dde3-4d0c-9c78-34c576b027ef name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 29 08:37:21 addons-051783 crio[938]: time="2025-09-29 08:37:21.189359743Z" level=info msg="Removed container b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c: headlamp/headlamp-85f8f8dc54-9nxzs/headlamp" id=94ba03d6-dde3-4d0c-9c78-34c576b027ef name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 29 08:37:23 addons-051783 crio[938]: time="2025-09-29 08:37:23.959101845Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=24ab8004-4fb7-4243-b12d-f13fe6b754ef name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:37:23 addons-051783 crio[938]: time="2025-09-29 08:37:23.959105797Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=7d09f0ab-967b-4f1c-a5d6-bb3b327466c3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:37:23 addons-051783 crio[938]: time="2025-09-29 08:37:23.959808615Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=24ab8004-4fb7-4243-b12d-f13fe6b754ef name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:37:23 addons-051783 crio[938]: time="2025-09-29 08:37:23.959933635Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=7d09f0ab-967b-4f1c-a5d6-bb3b327466c3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:37:37 addons-051783 crio[938]: time="2025-09-29 08:37:37.958236776Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=12a2e900-4e40-45ad-9e99-fbea0583fee0 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:37:37 addons-051783 crio[938]: time="2025-09-29 08:37:37.958571268Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=12a2e900-4e40-45ad-9e99-fbea0583fee0 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:37:44 addons-051783 crio[938]: time="2025-09-29 08:37:44.316459024Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=6815369b-2a3a-42e5-96f2-f9957f5abcfd name=/runtime.v1.ImageService/PullImage
	Sep 29 08:37:44 addons-051783 crio[938]: time="2025-09-29 08:37:44.329519184Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 29 08:37:51 addons-051783 crio[938]: time="2025-09-29 08:37:51.959150402Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=8951ff82-d606-477c-b766-a41ecd692a70 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:37:51 addons-051783 crio[938]: time="2025-09-29 08:37:51.959457082Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=8951ff82-d606-477c-b766-a41ecd692a70 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:37:57 addons-051783 crio[938]: time="2025-09-29 08:37:57.958240291Z" level=info msg="Checking image status: docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" id=b7a4307c-6e13-4c58-b9f7-338ad6577bd5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:37:57 addons-051783 crio[938]: time="2025-09-29 08:37:57.958549726Z" level=info msg="Image docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 not found" id=b7a4307c-6e13-4c58-b9f7-338ad6577bd5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:38:02 addons-051783 crio[938]: time="2025-09-29 08:38:02.958065712Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=61408d40-cf44-444e-9d40-5666c57903c7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:38:02 addons-051783 crio[938]: time="2025-09-29 08:38:02.958355356Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=61408d40-cf44-444e-9d40-5666c57903c7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	27b09cd861214       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          55 seconds ago       Running             csi-provisioner                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	f91efb30edf5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          About a minute ago   Running             busybox                                  0                   b37a2c191a161       busybox
	b891eff935e5b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   0a15333993f59       csi-hostpathplugin-59n9q
	1b49b8a0c49b0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   0a15333993f59       csi-hostpathplugin-59n9q
	78cd30ad0ac78       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   0a15333993f59       csi-hostpathplugin-59n9q
	80836b6027c82       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             4 minutes ago        Running             controller                               0                   3f400eb1db037       ingress-nginx-controller-9cc49f96f-qxqnk
	fa2f9b0c2f698       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            4 minutes ago        Running             gadget                                   0                   2b559b62ddeb7       gadget-p475s
	aa96d1eb6eb01       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              5 minutes ago        Running             registry-proxy                           0                   98289dafbaebe       registry-proxy-n2gtf
	45863f8b96f32       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      5 minutes ago        Running             volume-snapshot-controller               0                   f6de9f678281f       snapshot-controller-7d9fbc56b8-xpkwb
	958aa9722d317       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   5 minutes ago        Running             csi-external-health-monitor-controller   0                   0a15333993f59       csi-hostpathplugin-59n9q
	727b1119f42fa       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             5 minutes ago        Running             csi-attacher                             0                   942be1f7fe3d6       csi-hostpath-attacher-0
	7cd9c383cc30b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   5 minutes ago        Exited              patch                                    0                   748502b4be4ae       ingress-nginx-admission-patch-scvfj
	a07e229bf44a3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      5 minutes ago        Running             volume-snapshot-controller               0                   6d94b7786d291       snapshot-controller-7d9fbc56b8-n65gp
	964faa56de026       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              5 minutes ago        Running             csi-resizer                              0                   e4387328f31ab       csi-hostpath-resizer-0
	739db184c3579       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             6 minutes ago        Running             local-path-provisioner                   0                   7bd7dc81e5ff1       local-path-provisioner-648f6765c9-mzt6q
	64ec0688b1d33       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   6 minutes ago        Exited              create                                   0                   544ece1299156       ingress-nginx-admission-create-rbxvf
	249986f2864e5       docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d                                           6 minutes ago        Running             registry                                 0                   035aaf2fb1fe4       registry-66898fdd98-mpkgd
	0b9d99dc227ef       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               7 minutes ago        Running             cloud-spanner-emulator                   0                   6b5028c3929cf       cloud-spanner-emulator-85f6b7fc65-8dpkv
	ec2908a8acb76       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             7 minutes ago        Running             coredns                                  0                   8e80666def432       coredns-66bc5c9577-n8bx8
	48e51a6b3842e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             7 minutes ago        Running             storage-provisioner                      0                   b3063249d1902       storage-provisioner
	e6e25b7f19aec       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             7 minutes ago        Running             kindnet-cni                              0                   ea7b34d68514f       kindnet-47v7m
	a04df67a3379a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             7 minutes ago        Running             kube-proxy                               0                   9dbf0742f683c       kube-proxy-wbl7p
	3d5bc8bd7f0ff       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago        Running             etcd                                     0                   240e67822abd8       etcd-addons-051783
	2e4ff50d0ab7d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             8 minutes ago        Running             kube-apiserver                           0                   7d31b1c07e6fc       kube-apiserver-addons-051783
	6d75e80cafef2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             8 minutes ago        Running             kube-controller-manager                  0                   0e144a50e60a7       kube-controller-manager-addons-051783
	33ea9996cc1d3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             8 minutes ago        Running             kube-scheduler                           0                   eee48e5387175       kube-scheduler-addons-051783
	
	
	==> coredns [ec2908a8acb7634faddb0add70c1cdc6e4b2ec0e64082e83c00bcc1f5187825c] <==
	[INFO] 10.244.0.16:38635 - 10950 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003785881s
	[INFO] 10.244.0.16:34383 - 26680 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000086853s
	[INFO] 10.244.0.16:34383 - 26350 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000134221s
	[INFO] 10.244.0.16:34366 - 8372 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000070171s
	[INFO] 10.244.0.16:34366 - 8631 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000110185s
	[INFO] 10.244.0.16:41265 - 49477 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000071368s
	[INFO] 10.244.0.16:41265 - 49294 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000128673s
	[INFO] 10.244.0.16:45935 - 39249 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012754s
	[INFO] 10.244.0.16:45935 - 39430 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000176331s
	[INFO] 10.244.0.22:53196 - 35135 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000275242s
	[INFO] 10.244.0.22:51797 - 33403 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0002925s
	[INFO] 10.244.0.22:46539 - 50968 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149826s
	[INFO] 10.244.0.22:43614 - 36564 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000192618s
	[INFO] 10.244.0.22:59131 - 55683 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131163s
	[INFO] 10.244.0.22:53146 - 52855 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135376s
	[INFO] 10.244.0.22:44463 - 13157 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003407125s
	[INFO] 10.244.0.22:42741 - 2598 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005880456s
	[INFO] 10.244.0.22:43358 - 65412 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005081069s
	[INFO] 10.244.0.22:56808 - 9814 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005221504s
	[INFO] 10.244.0.22:57222 - 14161 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005164648s
	[INFO] 10.244.0.22:51834 - 10942 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006548594s
	[INFO] 10.244.0.22:37769 - 48093 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004505471s
	[INFO] 10.244.0.22:41744 - 45710 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007413415s
	[INFO] 10.244.0.22:56260 - 25719 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002697955s
	[INFO] 10.244.0.22:35710 - 58420 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003322975s
	
	
	==> describe nodes <==
	Name:               addons-051783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-051783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=addons-051783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-051783
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-051783"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 08:30:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-051783
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 08:38:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 08:37:36 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 08:37:36 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 08:37:36 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 08:37:36 +0000   Mon, 29 Sep 2025 08:30:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-051783
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 83273b57f406470abdf516e252de2f52
	  System UUID:                ec5529e1-1ad9-400f-8294-1adf6616ba82
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  default                     cloud-spanner-emulator-85f6b7fc65-8dpkv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gadget                      gadget-p475s                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-qxqnk    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         7m53s
	  kube-system                 amd-gpu-device-plugin-xvf9b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 coredns-66bc5c9577-n8bx8                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m54s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 csi-hostpathplugin-59n9q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 etcd-addons-051783                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m59s
	  kube-system                 kindnet-47v7m                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m54s
	  kube-system                 kube-apiserver-addons-051783                250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-controller-manager-addons-051783       200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-proxy-wbl7p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-scheduler-addons-051783                100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 registry-66898fdd98-mpkgd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 registry-proxy-n2gtf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 snapshot-controller-7d9fbc56b8-n65gp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 snapshot-controller-7d9fbc56b8-xpkwb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  local-path-storage          local-path-provisioner-648f6765c9-mzt6q     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2vsqw              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     7m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             438Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m53s  kube-proxy       
	  Normal  Starting                 8m     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m59s  kubelet          Node addons-051783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m59s  kubelet          Node addons-051783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m59s  kubelet          Node addons-051783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m55s  node-controller  Node addons-051783 event: Registered Node addons-051783 in Controller
	  Normal  NodeReady                7m13s  kubelet          Node addons-051783 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[ +16.774979] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 21 41 37 dd f5 08 06
	[  +0.000328] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[  +6.075530] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 33 34 7b 85 cf 08 06
	[  +0.055887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[Sep29 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fb 19 b5 d0 db 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[  +6.806604] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[ +13.433681] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	[  +8.966707] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 f7 73 94 db cd 08 06
	[  +0.000344] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[Sep29 08:07] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 ad d0 02 25 47 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	
	
	==> etcd [3d5bc8bd7f0ffa9831231e2ccd173ca20be89d6dcc1ee1ad3b14f8dd9571bb86] <==
	{"level":"warn","ts":"2025-09-29T08:30:02.977228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:02.983452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:02.989881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:02.997494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.003681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.011615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.018242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.030088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.033604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.039960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.046371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.100824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.793114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.799945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.542994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.549599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.569139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.575527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:32:28.071330Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.763336ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071530Z","caller":"traceutil/trace.go:172","msg":"trace[30119979] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1117; }","duration":"161.980989ms","start":"2025-09-29T08:32:27.909530Z","end":"2025-09-29T08:32:28.071511Z","steps":["trace[30119979] 'range keys from in-memory index tree'  (duration: 161.701686ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T08:32:28.071329Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.131454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071650Z","caller":"traceutil/trace.go:172","msg":"trace[1183857226] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"120.458435ms","start":"2025-09-29T08:32:27.951174Z","end":"2025-09-29T08:32:28.071633Z","steps":["trace[1183857226] 'range keys from in-memory index tree'  (duration: 120.052644ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T08:33:12.239457Z","caller":"traceutil/trace.go:172","msg":"trace[155675200] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"129.084223ms","start":"2025-09-29T08:33:12.110348Z","end":"2025-09-29T08:33:12.239432Z","steps":["trace[155675200] 'process raft request'  (duration: 69.579624ms)","trace[155675200] 'compare'  (duration: 59.405727ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T08:33:12.474373Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.785446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:33:12.474452Z","caller":"traceutil/trace.go:172","msg":"trace[1612262900] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1258; }","duration":"129.87677ms","start":"2025-09-29T08:33:12.344560Z","end":"2025-09-29T08:33:12.474437Z","steps":["trace[1612262900] 'range keys from in-memory index tree'  (duration: 129.713966ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:38:05 up  2:20,  0 users,  load average: 0.22, 0.51, 0.82
	Linux addons-051783 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e6e25b7f19aec7f99b8219bbbaa88084f2510369dbfa360e267a083261d1c336] <==
	I0929 08:36:02.481975       1 main.go:301] handling current node
	I0929 08:36:12.477905       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:36:12.477942       1 main.go:301] handling current node
	I0929 08:36:22.481075       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:36:22.481107       1 main.go:301] handling current node
	I0929 08:36:32.475934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:36:32.475987       1 main.go:301] handling current node
	I0929 08:36:42.475457       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:36:42.475509       1 main.go:301] handling current node
	I0929 08:36:52.477907       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:36:52.477942       1 main.go:301] handling current node
	I0929 08:37:02.476024       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:37:02.476081       1 main.go:301] handling current node
	I0929 08:37:12.477030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:37:12.477068       1 main.go:301] handling current node
	I0929 08:37:22.476009       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:37:22.476072       1 main.go:301] handling current node
	I0929 08:37:32.480923       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:37:32.480967       1 main.go:301] handling current node
	I0929 08:37:42.478908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:37:42.478950       1 main.go:301] handling current node
	I0929 08:37:52.479909       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:37:52.479942       1 main.go:301] handling current node
	I0929 08:38:02.477986       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:38:02.478037       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e4ff50d0ab7df575a409e71f6c86b1e3bd4b8f41db0427eb9d65cbbef08b9a3] <==
	W0929 08:30:40.575542       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0929 08:30:52.660152       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.660293       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.661168       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.661206       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.680870       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.680901       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.682064       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.682170       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:59.130480       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 08:30:59.130524       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.130558       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 08:30:59.130912       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.135946       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.157237       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	I0929 08:30:59.225977       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 08:36:44.813354       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47410: use of closed network connection
	E0929 08:36:44.997114       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47438: use of closed network connection
	I0929 08:36:54.051263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.58.104"}
	I0929 08:37:00.154224       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 08:37:00.239132       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 08:37:00.408198       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.245.4"}
	
	
	==> kube-controller-manager [6d75e80cafef289bcb0634728686530f7d177ec79248071405ed0223eda388c2] <==
	I0929 08:30:10.528446       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 08:30:10.528568       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 08:30:10.529057       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 08:30:10.531230       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 08:30:10.531262       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 08:30:10.531324       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 08:30:10.531387       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 08:30:10.531454       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 08:30:10.531471       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 08:30:10.531993       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:30:10.537696       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 08:30:10.537989       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-051783" podCIDRs=["10.244.0.0/24"]
	I0929 08:30:10.542899       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 08:30:10.552460       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 08:30:12.553957       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0929 08:30:40.536876       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 08:30:40.537102       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0929 08:30:40.537173       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 08:30:40.560116       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0929 08:30:40.563366       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 08:30:40.638265       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:30:40.663861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 08:30:55.534409       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0929 08:36:58.265328       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0929 08:37:30.688902       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [a04df67a3379aa412e270c65b38675702f42ba0dc9e5c07b8052fb9a090d6471] <==
	I0929 08:30:12.128941       1 server_linux.go:53] "Using iptables proxy"
	I0929 08:30:12.417641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:30:12.520178       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:30:12.520269       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:30:12.522477       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:30:12.570590       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:30:12.570755       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:30:12.583981       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:30:12.584563       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:30:12.584628       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:30:12.586703       1 config.go:200] "Starting service config controller"
	I0929 08:30:12.586768       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:30:12.586873       1 config.go:309] "Starting node config controller"
	I0929 08:30:12.586913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:30:12.586938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 08:30:12.587504       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:30:12.587567       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:30:12.587568       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:30:12.587628       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:30:12.687916       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:30:12.688043       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:30:12.688062       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [33ea9996cc1d356857ab17f8e8157021f2b58227ecdb78065f0395986fc73f7b] <==
	E0929 08:30:03.522570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:03.522679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:03.522790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:03.522954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 08:30:03.522963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:03.522973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:03.523052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 08:30:03.523168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 08:30:03.523181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:03.523198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:03.523218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 08:30:03.523269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:30:03.523304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 08:30:03.523373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:03.523781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:30:04.391474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:04.430593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:04.474872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:04.497934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:04.640977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:04.655178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:04.765484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:04.784825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:04.965095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 08:30:06.819658       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 08:37:21 addons-051783 kubelet[1568]: E0929 08:37:21.190137    1568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c\": container with ID starting with b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c not found: ID does not exist" containerID="b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c"
	Sep 29 08:37:21 addons-051783 kubelet[1568]: I0929 08:37:21.190178    1568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c"} err="failed to get container status \"b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c\": rpc error: code = NotFound desc = could not find container \"b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c\": container with ID starting with b70351aa2e586cb8bd83f4f55b3cd3aeb2e307644874fbc7d50bdb26b9bc868c not found: ID does not exist"
	Sep 29 08:37:21 addons-051783 kubelet[1568]: I0929 08:37:21.960449    1568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4575d99-9a3b-4cda-9731-fdad2c9fed6d" path="/var/lib/kubelet/pods/a4575d99-9a3b-4cda-9731-fdad2c9fed6d/volumes"
	Sep 29 08:37:23 addons-051783 kubelet[1568]: I0929 08:37:23.958543    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xvf9b" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:37:23 addons-051783 kubelet[1568]: E0929 08:37:23.960185    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	Sep 29 08:37:26 addons-051783 kubelet[1568]: E0929 08:37:26.032902    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135046032686048  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:450067}  inodes_used:{value:177}}"
	Sep 29 08:37:26 addons-051783 kubelet[1568]: E0929 08:37:26.032983    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135046032686048  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:450067}  inodes_used:{value:177}}"
	Sep 29 08:37:36 addons-051783 kubelet[1568]: E0929 08:37:36.035136    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135056034917154  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:450067}  inodes_used:{value:177}}"
	Sep 29 08:37:36 addons-051783 kubelet[1568]: E0929 08:37:36.035179    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135056034917154  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:450067}  inodes_used:{value:177}}"
	Sep 29 08:37:37 addons-051783 kubelet[1568]: E0929 08:37:37.958891    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	Sep 29 08:37:40 addons-051783 kubelet[1568]: I0929 08:37:40.957826    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:37:44 addons-051783 kubelet[1568]: E0929 08:37:44.315971    1568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
	Sep 29 08:37:44 addons-051783 kubelet[1568]: E0929 08:37:44.316043    1568 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
	Sep 29 08:37:44 addons-051783 kubelet[1568]: E0929 08:37:44.316254    1568 kuberuntime_manager.go:1449] "Unhandled Error" err="container yakd start failed in pod yakd-dashboard-5ff678cb9-2vsqw_yakd-dashboard(64489d6d-e5af-42b1-8efc-47e8285d526b): ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:37:44 addons-051783 kubelet[1568]: E0929 08:37:44.316305    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ErrImagePull: \"reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-2vsqw" podUID="64489d6d-e5af-42b1-8efc-47e8285d526b"
	Sep 29 08:37:45 addons-051783 kubelet[1568]: I0929 08:37:45.959425    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-mpkgd" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:37:46 addons-051783 kubelet[1568]: E0929 08:37:46.037134    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135066036910967  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:450067}  inodes_used:{value:177}}"
	Sep 29 08:37:46 addons-051783 kubelet[1568]: E0929 08:37:46.037168    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135066036910967  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:450067}  inodes_used:{value:177}}"
	Sep 29 08:37:51 addons-051783 kubelet[1568]: E0929 08:37:51.959884    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	Sep 29 08:37:56 addons-051783 kubelet[1568]: E0929 08:37:56.039134    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135076038883217  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:450067}  inodes_used:{value:177}}"
	Sep 29 08:37:56 addons-051783 kubelet[1568]: E0929 08:37:56.039164    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135076038883217  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:450067}  inodes_used:{value:177}}"
	Sep 29 08:37:57 addons-051783 kubelet[1568]: E0929 08:37:57.958898    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-2vsqw" podUID="64489d6d-e5af-42b1-8efc-47e8285d526b"
	Sep 29 08:38:02 addons-051783 kubelet[1568]: E0929 08:38:02.958747    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	Sep 29 08:38:06 addons-051783 kubelet[1568]: E0929 08:38:06.041217    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135086040977909  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:450067}  inodes_used:{value:177}}"
	Sep 29 08:38:06 addons-051783 kubelet[1568]: E0929 08:38:06.041250    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135086040977909  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:450067}  inodes_used:{value:177}}"
	
	
	==> storage-provisioner [48e51a6b3842e2e63335e82d65f22a4db94233392a881d6d3ff86158809cd5ed] <==
	W0929 08:37:41.168191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:43.171311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:43.176586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:45.180744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:45.184767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:47.188001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:47.191803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:49.195394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:49.201002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:51.203785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:51.207579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:53.210465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:53.215692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:55.218769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:55.222710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:57.225622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:57.229588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:59.232675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:37:59.236689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:38:01.239515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:38:01.243460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:38:03.246332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:38:03.250124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:38:05.253386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:38:05.257553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051783 -n addons-051783
helpers_test.go:269: (dbg) Run:  kubectl --context addons-051783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx registry-test ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b csi-hostpathplugin-59n9q kube-ingress-dns-minikube yakd-dashboard-5ff678cb9-2vsqw
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-051783 describe pod nginx registry-test ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b csi-hostpathplugin-59n9q kube-ingress-dns-minikube yakd-dashboard-5ff678cb9-2vsqw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-051783 describe pod nginx registry-test ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b csi-hostpathplugin-59n9q kube-ingress-dns-minikube yakd-dashboard-5ff678cb9-2vsqw: exit status 1 (85.774803ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:37:00 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrnn8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wrnn8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  67s   default-scheduler  Successfully assigned default/nginx to addons-051783
	  Normal  Pulling    67s   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:                      registry-test
	Namespace:                 default
	Priority:                  0
	Service Account:           default
	Node:                      addons-051783/192.168.49.2
	Start Time:                Mon, 29 Sep 2025 08:37:04 +0000
	Labels:                    run=registry-test
	Annotations:               <none>
	Status:                    Terminating (lasts <invalid>)
	Termination Grace Period:  30s
	IP:                        
	IPs:                       <none>
	Containers:
	  registry-test:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kfjvd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kfjvd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  63s   default-scheduler  Successfully assigned default/registry-test to addons-051783
	  Normal  Pulling    63s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rbxvf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-scvfj" not found
	Error from server (NotFound): pods "amd-gpu-device-plugin-xvf9b" not found
	Error from server (NotFound): pods "csi-hostpathplugin-59n9q" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-2vsqw" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-051783 describe pod nginx registry-test ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b csi-hostpathplugin-59n9q kube-ingress-dns-minikube yakd-dashboard-5ff678cb9-2vsqw: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (74.32s)
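For reference, the registry check that timed out above only needs the in-cluster service to answer HTTP. A minimal, hypothetical Go sketch of an equivalent probe (run from inside the cluster; the service URL and the one-minute budget come from the test output above, everything else is illustrative and is not the test suite's own code) could look like this:

package main

import (
	"context"
	"fmt"
	"net/http"
	"os"
	"time"
)

// Probe the in-cluster registry the way the busybox pod's `wget --spider`
// does: send a request and inspect only the status line. The URL and the
// one-minute deadline mirror the failure above; this is illustrative only.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://registry.kube-system.svc.cluster.local", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Proto, resp.Status) // the test expects "HTTP/1.1 200"
}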

                                                
                                    
TestAddons/parallel/Ingress (492.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-051783 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-051783 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-051783 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b3f305e2-2997-431f-b6d3-7d97f0b357aa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051783 -n addons-051783
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-09-29 08:45:00.720981025 +0000 UTC m=+948.367606478
addons_test.go:252: (dbg) Run:  kubectl --context addons-051783 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-051783 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-051783/192.168.49.2
Start Time:       Mon, 29 Sep 2025 08:37:00 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.25
IPs:
  IP:  10.244.0.25
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrnn8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-wrnn8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  8m                   default-scheduler  Successfully assigned default/nginx to addons-051783
  Warning  Failed     6m45s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m53s (x4 over 8m)   kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     76s (x4 over 6m45s)  kubelet            Error: ErrImagePull
  Warning  Failed     76s (x3 over 5m42s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    2s (x10 over 6m45s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     2s (x10 over 6m45s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-051783 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-051783 logs nginx -n default: exit status 1 (64.740205ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:252: kubectl --context addons-051783 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
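The wait that produced this error is a plain poll against the label selector with a hard deadline. As a rough, hypothetical illustration of that pattern (shelling out to kubectl; the selector, namespace, profile name and 8-minute budget are taken from the output above, the rest is assumed and is not the helpers_test.go implementation):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Poll the cluster until every pod matching run=nginx reports phase Running,
// or the 8-minute budget expires with "context deadline exceeded", as in the
// failure above. Illustrative stand-in only.
func waitForNginx(ctx context.Context) error {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", "addons-051783",
			"-n", "default", "get", "pods", "-l", "run=nginx",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			allRunning := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 8*time.Minute)
	defer cancel()
	if err := waitForNginx(ctx); err != nil {
		fmt.Println("failed waiting for nginx pod:", err)
	}
}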
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-051783
helpers_test.go:243: (dbg) docker inspect addons-051783:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	        "Created": "2025-09-29T08:29:49.784096917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 388185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T08:29:49.817498779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hosts",
	        "LogPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24-json.log",
	        "Name": "/addons-051783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-051783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-051783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	                "LowerDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-051783",
	                "Source": "/var/lib/docker/volumes/addons-051783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-051783",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-051783",
	                "name.minikube.sigs.k8s.io": "addons-051783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "047419f5f1ab31c122f731e4981df640cdefbc71a38b2a98a0269c254b8b5147",
	            "SandboxKey": "/var/run/docker/netns/047419f5f1ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-051783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:6e:72:c6:39:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0a6b532c24ef61399a92b99bcc9c2c11ccb6f875b789fadd5474d59e3dfaa8b",
	                    "EndpointID": "1838c1e0213d9bfb41a2e140fea05dd9b5a4866fea7930ce517a2c020e4c5b9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-051783",
	                        "d5025459b831"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-051783 -n addons-051783
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 logs -n 25: (1.341662551s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-575596                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-575596   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-749576                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-084266 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-084266                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-867285 --alsologtostderr --binary-mirror http://127.0.0.1:34813 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-867285                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ start   │ -p addons-051783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ enable headlamp -p addons-051783 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                           │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ ip      │ addons-051783 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	│ addons  │ addons-051783 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	│ addons  │ addons-051783 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:39 UTC │ 29 Sep 25 08:41 UTC │
	│ addons  │ addons-051783 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:41 UTC │ 29 Sep 25 08:41 UTC │
	│ addons  │ addons-051783 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:43 UTC │ 29 Sep 25 08:43 UTC │
	│ addons  │ addons-051783 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:44 UTC │ 29 Sep 25 08:44 UTC │
	│ addons  │ addons-051783 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:44 UTC │ 29 Sep 25 08:44 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:29:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:29:26.048391  387539 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:29:26.048698  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048709  387539 out.go:374] Setting ErrFile to fd 2...
	I0929 08:29:26.048715  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048947  387539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:29:26.049570  387539 out.go:368] Setting JSON to false
	I0929 08:29:26.050522  387539 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7915,"bootTime":1759126651,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:29:26.050623  387539 start.go:140] virtualization: kvm guest
	I0929 08:29:26.052691  387539 out.go:179] * [addons-051783] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:29:26.053951  387539 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:29:26.053949  387539 notify.go:220] Checking for updates...
	I0929 08:29:26.056443  387539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:29:26.057666  387539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:29:26.058965  387539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:29:26.060266  387539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:29:26.061458  387539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:29:26.062925  387539 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:29:26.085693  387539 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:29:26.085842  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.138374  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.129030053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.138489  387539 docker.go:318] overlay module found
	I0929 08:29:26.140424  387539 out.go:179] * Using the docker driver based on user configuration
	I0929 08:29:26.141686  387539 start.go:304] selected driver: docker
	I0929 08:29:26.141705  387539 start.go:924] validating driver "docker" against <nil>
	I0929 08:29:26.141717  387539 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:29:26.142365  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.198070  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.188331621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.198307  387539 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 08:29:26.198590  387539 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:29:26.200386  387539 out.go:179] * Using Docker driver with root privileges
	I0929 08:29:26.201498  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:26.201578  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:26.201592  387539 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 08:29:26.201692  387539 start.go:348] cluster config:
	{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:29:26.202985  387539 out.go:179] * Starting "addons-051783" primary control-plane node in "addons-051783" cluster
	I0929 08:29:26.204068  387539 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 08:29:26.205294  387539 out.go:179] * Pulling base image v0.0.48 ...
	I0929 08:29:26.206376  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:26.206412  387539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 08:29:26.206422  387539 cache.go:58] Caching tarball of preloaded images
	I0929 08:29:26.206482  387539 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 08:29:26.206520  387539 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 08:29:26.206532  387539 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 08:29:26.206899  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:26.206927  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json: {Name:mk2a286bc12b96a7a99203a2062747f0cef91a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:26.223250  387539 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 08:29:26.223398  387539 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 08:29:26.223419  387539 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 08:29:26.223423  387539 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 08:29:26.223433  387539 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 08:29:26.223443  387539 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 08:29:38.381567  387539 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 08:29:38.381612  387539 cache.go:232] Successfully downloaded all kic artifacts
	I0929 08:29:38.381692  387539 start.go:360] acquireMachinesLock for addons-051783: {Name:mk2e012788fca6778bd19d14926129f41648dfda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 08:29:38.381939  387539 start.go:364] duration metric: took 219.203µs to acquireMachinesLock for "addons-051783"
	I0929 08:29:38.381976  387539 start.go:93] Provisioning new machine with config: &{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:29:38.382063  387539 start.go:125] createHost starting for "" (driver="docker")
	I0929 08:29:38.383873  387539 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 08:29:38.384110  387539 start.go:159] libmachine.API.Create for "addons-051783" (driver="docker")
	I0929 08:29:38.384143  387539 client.go:168] LocalClient.Create starting
	I0929 08:29:38.384255  387539 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem
	I0929 08:29:38.717409  387539 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem
	I0929 08:29:39.058441  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 08:29:39.075697  387539 cli_runner.go:211] docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 08:29:39.075776  387539 network_create.go:284] running [docker network inspect addons-051783] to gather additional debugging logs...
	I0929 08:29:39.075797  387539 cli_runner.go:164] Run: docker network inspect addons-051783
	W0929 08:29:39.093367  387539 cli_runner.go:211] docker network inspect addons-051783 returned with exit code 1
	I0929 08:29:39.093407  387539 network_create.go:287] error running [docker network inspect addons-051783]: docker network inspect addons-051783: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-051783 not found
	I0929 08:29:39.093422  387539 network_create.go:289] output of [docker network inspect addons-051783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-051783 not found
	
	** /stderr **
	I0929 08:29:39.093524  387539 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:39.112614  387539 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c10860}
	I0929 08:29:39.112659  387539 network_create.go:124] attempt to create docker network addons-051783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 08:29:39.112709  387539 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-051783 addons-051783
	I0929 08:29:39.172396  387539 network_create.go:108] docker network addons-051783 192.168.49.0/24 created
	I0929 08:29:39.172433  387539 kic.go:121] calculated static IP "192.168.49.2" for the "addons-051783" container
	I0929 08:29:39.172502  387539 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 08:29:39.190245  387539 cli_runner.go:164] Run: docker volume create addons-051783 --label name.minikube.sigs.k8s.io=addons-051783 --label created_by.minikube.sigs.k8s.io=true
	I0929 08:29:39.209341  387539 oci.go:103] Successfully created a docker volume addons-051783
	I0929 08:29:39.209430  387539 cli_runner.go:164] Run: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 08:29:45.546598  387539 cli_runner.go:217] Completed: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.337124509s)
	I0929 08:29:45.546633  387539 oci.go:107] Successfully prepared a docker volume addons-051783
	I0929 08:29:45.546654  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:45.546683  387539 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 08:29:45.546737  387539 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 08:29:49.714226  387539 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.167437965s)
	I0929 08:29:49.714268  387539 kic.go:203] duration metric: took 4.167582619s to extract preloaded images to volume ...
	W0929 08:29:49.714368  387539 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 08:29:49.714404  387539 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 08:29:49.714455  387539 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 08:29:49.767111  387539 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-051783 --name addons-051783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-051783 --network addons-051783 --ip 192.168.49.2 --volume addons-051783:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 08:29:50.031579  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Running}}
	I0929 08:29:50.049810  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.068448  387539 cli_runner.go:164] Run: docker exec addons-051783 stat /var/lib/dpkg/alternatives/iptables
	I0929 08:29:50.119527  387539 oci.go:144] the created container "addons-051783" has a running status.
	I0929 08:29:50.119561  387539 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa...
	I0929 08:29:50.320586  387539 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 08:29:50.349341  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.370499  387539 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 08:29:50.370528  387539 kic_runner.go:114] Args: [docker exec --privileged addons-051783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 08:29:50.419544  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.438350  387539 machine.go:93] provisionDockerMachine start ...
	I0929 08:29:50.438444  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.459048  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.459374  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.459393  387539 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 08:29:50.596058  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.596100  387539 ubuntu.go:182] provisioning hostname "addons-051783"
	I0929 08:29:50.596175  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.615278  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.615589  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.615612  387539 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-051783 && echo "addons-051783" | sudo tee /etc/hostname
	I0929 08:29:50.766108  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.766195  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.785560  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.785774  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.785791  387539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-051783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-051783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-051783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 08:29:50.924619  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 08:29:50.924652  387539 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 08:29:50.924674  387539 ubuntu.go:190] setting up certificates
	I0929 08:29:50.924687  387539 provision.go:84] configureAuth start
	I0929 08:29:50.924737  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:50.943329  387539 provision.go:143] copyHostCerts
	I0929 08:29:50.943421  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 08:29:50.943556  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 08:29:50.943643  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 08:29:50.943713  387539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.addons-051783 san=[127.0.0.1 192.168.49.2 addons-051783 localhost minikube]
	I0929 08:29:51.148195  387539 provision.go:177] copyRemoteCerts
	I0929 08:29:51.148260  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 08:29:51.148304  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.166345  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.264074  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 08:29:51.290856  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 08:29:51.316758  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 08:29:51.341889  387539 provision.go:87] duration metric: took 417.187234ms to configureAuth
	I0929 08:29:51.341922  387539 ubuntu.go:206] setting minikube options for container-runtime
	I0929 08:29:51.342090  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:29:51.342194  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.359952  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:51.360170  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:51.360189  387539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 08:29:51.599614  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 08:29:51.599641  387539 machine.go:96] duration metric: took 1.161262882s to provisionDockerMachine
	I0929 08:29:51.599653  387539 client.go:171] duration metric: took 13.215501429s to LocalClient.Create
	I0929 08:29:51.599668  387539 start.go:167] duration metric: took 13.215557799s to libmachine.API.Create "addons-051783"
	I0929 08:29:51.599677  387539 start.go:293] postStartSetup for "addons-051783" (driver="docker")
	I0929 08:29:51.599688  387539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 08:29:51.599774  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 08:29:51.599856  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.618351  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.717587  387539 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 08:29:51.721317  387539 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 08:29:51.721352  387539 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 08:29:51.721363  387539 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 08:29:51.721372  387539 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 08:29:51.721390  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 08:29:51.721462  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 08:29:51.721495  387539 start.go:296] duration metric: took 121.8109ms for postStartSetup
	I0929 08:29:51.721801  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.739650  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:51.740046  387539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 08:29:51.740104  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.758050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.851192  387539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 08:29:51.855723  387539 start.go:128] duration metric: took 13.4736408s to createHost
	I0929 08:29:51.855753  387539 start.go:83] releasing machines lock for "addons-051783", held for 13.47379323s
	I0929 08:29:51.855844  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.873999  387539 ssh_runner.go:195] Run: cat /version.json
	I0929 08:29:51.874046  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.874101  387539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 08:29:51.874186  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.892677  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.892826  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.984022  387539 ssh_runner.go:195] Run: systemctl --version
	I0929 08:29:52.057018  387539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 08:29:52.197504  387539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 08:29:52.202664  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.226004  387539 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 08:29:52.226089  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.256267  387539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 08:29:52.256294  387539 start.go:495] detecting cgroup driver to use...
	I0929 08:29:52.256336  387539 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 08:29:52.256387  387539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 08:29:52.272062  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 08:29:52.284075  387539 docker.go:218] disabling cri-docker service (if available) ...
	I0929 08:29:52.284139  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 08:29:52.297608  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 08:29:52.311496  387539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 08:29:52.379434  387539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 08:29:52.452878  387539 docker.go:234] disabling docker service ...
	I0929 08:29:52.452951  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 08:29:52.471190  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 08:29:52.482728  387539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 08:29:52.553081  387539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 08:29:52.660824  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 08:29:52.672658  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 08:29:52.689950  387539 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21650-382648/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I0929 08:29:53.606681  387539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 08:29:53.606744  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.620746  387539 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 08:29:53.620827  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.632032  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.642692  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.653396  387539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 08:29:53.663250  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.673800  387539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.690677  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.701296  387539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 08:29:53.710748  387539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 08:29:53.720068  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:53.822567  387539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 08:29:54.052148  387539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 08:29:54.052242  387539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 08:29:54.056279  387539 start.go:563] Will wait 60s for crictl version
	I0929 08:29:54.056335  387539 ssh_runner.go:195] Run: which crictl
	I0929 08:29:54.059686  387539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 08:29:54.093633  387539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 08:29:54.093726  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.130572  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.167704  387539 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	I0929 08:29:54.169060  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:54.186559  387539 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 08:29:54.190730  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 08:29:54.202692  387539 kubeadm.go:875] updating cluster {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 08:29:54.202909  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.337502  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.468366  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.649435  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:54.649610  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.777589  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.915339  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:55.048055  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.117941  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.117965  387539 crio.go:433] Images already preloaded, skipping extraction
	I0929 08:29:55.118025  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.154367  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.154391  387539 cache_images.go:85] Images are preloaded, skipping loading
	I0929 08:29:55.154401  387539 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I0929 08:29:55.154505  387539 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-051783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 08:29:55.154591  387539 ssh_runner.go:195] Run: crio config
	I0929 08:29:55.197157  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:55.197179  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:55.197193  387539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 08:29:55.197222  387539 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-051783 NodeName:addons-051783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 08:29:55.197413  387539 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-051783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 08:29:55.197493  387539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 08:29:55.207525  387539 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 08:29:55.207613  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 08:29:55.217221  387539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 08:29:55.235810  387539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 08:29:55.258594  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0929 08:29:55.277991  387539 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 08:29:55.281790  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
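The one-liner above rewrites /etc/hosts so that exactly one entry maps control-plane.minikube.internal to the node IP: any stale line for that host is dropped and a fresh "IP<TAB>host" line is appended. The same idea in plain Go, as an illustrative sketch rather than the code that produced this log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry drops any existing line for host and appends "ip\thost",
// mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue // stale entry for this host: drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}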
	I0929 08:29:55.293204  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:55.360353  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:29:55.382375  387539 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783 for IP: 192.168.49.2
	I0929 08:29:55.382400  387539 certs.go:194] generating shared ca certs ...
	I0929 08:29:55.382416  387539 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.382548  387539 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 08:29:55.651560  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt ...
	I0929 08:29:55.651593  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt: {Name:mk53fbf30de594b3575593db0eac7c74aa2a569b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651775  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key ...
	I0929 08:29:55.651787  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key: {Name:mk35c377f1d90bf347db7dc4624ea5b41f2dcae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651874  387539 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 08:29:56.010531  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt ...
	I0929 08:29:56.010572  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt: {Name:mkabe28787fe5521225369fcdd8a8684c242d367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010810  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key ...
	I0929 08:29:56.010828  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key: {Name:mk151240dae8e83bb981e456caae01db62eb2077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010954  387539 certs.go:256] generating profile certs ...
	I0929 08:29:56.011050  387539 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key
	I0929 08:29:56.011071  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt with IP's: []
	I0929 08:29:56.156766  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt ...
	I0929 08:29:56.156798  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: {Name:mk9b8f8dd7c08d896eb2f2a24df27c4df7b8a87a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157020  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key ...
	I0929 08:29:56.157045  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key: {Name:mk413d2883ee03859619bae9a6ad426c2dac294b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157158  387539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d
	I0929 08:29:56.157188  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 08:29:56.672467  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d ...
	I0929 08:29:56.672506  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d: {Name:mka498a3f60495ba4009bb038cca767d64e6d878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672723  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d ...
	I0929 08:29:56.672747  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d: {Name:mkd42036f907b80afa6962c66b97c00a14ed475b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672879  387539 certs.go:381] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt
	I0929 08:29:56.672993  387539 certs.go:385] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key
	I0929 08:29:56.673074  387539 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key
	I0929 08:29:56.673103  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt with IP's: []
	I0929 08:29:57.054367  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt ...
	I0929 08:29:57.054403  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt: {Name:mk108739363f385844a88df9ec106753ae771d0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054593  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key ...
	I0929 08:29:57.054605  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key: {Name:mk26b223288f2fd31a6e78b544277cdc3d5192ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054865  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 08:29:57.054909  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 08:29:57.054936  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 08:29:57.054959  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
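The profile certificates generated above are ordinary CA-signed x509 certificates whose IP SANs cover the service-CIDR gateway, loopback and the node IP. A self-contained sketch using only the standard library, with the IPs taken from the log; the key sizes, lifetimes and subjects are illustrative and error handling is trimmed:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert for the apiserver, valid for the cluster and node IPs.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}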
	I0929 08:29:57.055530  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 08:29:57.081419  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 08:29:57.107158  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 08:29:57.132325  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 08:29:57.157699  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 08:29:57.182851  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 08:29:57.207862  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 08:29:57.233471  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 08:29:57.258657  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 08:29:57.286501  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 08:29:57.305136  387539 ssh_runner.go:195] Run: openssl version
	I0929 08:29:57.310898  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 08:29:57.323725  387539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327458  387539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327527  387539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.334303  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
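The openssl step above computes the subject hash of minikubeCA.pem and links it into /etc/ssl/certs as <hash>.0 (b5213941.0 here) so the system trust store picks the CA up. A small sketch of the same flow, assuming openssl is on PATH; error handling is trimmed:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Recreate the symlink only if it is missing, mirroring the
	// "test -L ... || ln -fs ..." guard in the log.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}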
	I0929 08:29:57.344385  387539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 08:29:57.347990  387539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 08:29:57.348046  387539 kubeadm.go:392] StartCluster: {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:29:57.348116  387539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 08:29:57.348159  387539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 08:29:57.385638  387539 cri.go:89] found id: ""
	I0929 08:29:57.385716  387539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 08:29:57.395454  387539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 08:29:57.405038  387539 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 08:29:57.405100  387539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 08:29:57.414685  387539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 08:29:57.414705  387539 kubeadm.go:157] found existing configuration files:
	
	I0929 08:29:57.414765  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 08:29:57.424091  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 08:29:57.424158  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 08:29:57.433341  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 08:29:57.442616  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 08:29:57.442679  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 08:29:57.451665  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.460943  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 08:29:57.461008  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.470122  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 08:29:57.479257  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 08:29:57.479340  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
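The sequence above is a stale-config sweep: each kubeconfig-style file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init runs. A compact, illustrative sketch of that logic:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, endpoint) {
			continue // config already targets the right endpoint; keep it
		}
		// Missing file or wrong endpoint: remove it (rm -f semantics).
		os.Remove(f)
		fmt.Printf("removed stale config %s\n", f)
	}
}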
	I0929 08:29:57.488496  387539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 08:29:57.543664  387539 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 08:29:57.607707  387539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 08:30:06.732943  387539 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I0929 08:30:06.732999  387539 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 08:30:06.733103  387539 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 08:30:06.733192  387539 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 08:30:06.733241  387539 kubeadm.go:310] OS: Linux
	I0929 08:30:06.733332  387539 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 08:30:06.733405  387539 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 08:30:06.733457  387539 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 08:30:06.733497  387539 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 08:30:06.733545  387539 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 08:30:06.733624  387539 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 08:30:06.733688  387539 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 08:30:06.733751  387539 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 08:30:06.733912  387539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 08:30:06.734049  387539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 08:30:06.734125  387539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 08:30:06.734176  387539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 08:30:06.736008  387539 out.go:252]   - Generating certificates and keys ...
	I0929 08:30:06.736074  387539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 08:30:06.736130  387539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 08:30:06.736184  387539 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 08:30:06.736237  387539 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 08:30:06.736289  387539 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 08:30:06.736356  387539 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 08:30:06.736446  387539 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 08:30:06.736584  387539 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736671  387539 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 08:30:06.736803  387539 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736949  387539 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 08:30:06.737047  387539 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 08:30:06.737115  387539 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 08:30:06.737192  387539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 08:30:06.737274  387539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 08:30:06.737358  387539 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 08:30:06.737431  387539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 08:30:06.737517  387539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 08:30:06.737617  387539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 08:30:06.737730  387539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 08:30:06.737805  387539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 08:30:06.739945  387539 out.go:252]   - Booting up control plane ...
	I0929 08:30:06.740037  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 08:30:06.740106  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 08:30:06.740177  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 08:30:06.740270  387539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 08:30:06.740362  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 08:30:06.740460  387539 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 08:30:06.740572  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 08:30:06.740634  387539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 08:30:06.740771  387539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 08:30:06.740901  387539 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 08:30:06.740969  387539 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.961891ms
	I0929 08:30:06.741050  387539 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 08:30:06.741148  387539 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 08:30:06.741256  387539 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 08:30:06.741361  387539 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 08:30:06.741468  387539 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.198584202s
	I0929 08:30:06.741557  387539 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.20667671s
	I0929 08:30:06.741647  387539 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002286434s
	I0929 08:30:06.741774  387539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 08:30:06.741941  387539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 08:30:06.741998  387539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 08:30:06.742173  387539 kubeadm.go:310] [mark-control-plane] Marking the node addons-051783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 08:30:06.742236  387539 kubeadm.go:310] [bootstrap-token] Using token: sez7z1.jh96okhowb57z8tt
	I0929 08:30:06.743877  387539 out.go:252]   - Configuring RBAC rules ...
	I0929 08:30:06.743987  387539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 08:30:06.744079  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 08:30:06.744207  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 08:30:06.744316  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 08:30:06.744423  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 08:30:06.744505  387539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 08:30:06.744607  387539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 08:30:06.744646  387539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 08:30:06.744689  387539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 08:30:06.744695  387539 kubeadm.go:310] 
	I0929 08:30:06.744746  387539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 08:30:06.744752  387539 kubeadm.go:310] 
	I0929 08:30:06.744820  387539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 08:30:06.744826  387539 kubeadm.go:310] 
	I0929 08:30:06.744869  387539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 08:30:06.744924  387539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 08:30:06.744972  387539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 08:30:06.744978  387539 kubeadm.go:310] 
	I0929 08:30:06.745052  387539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 08:30:06.745066  387539 kubeadm.go:310] 
	I0929 08:30:06.745135  387539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 08:30:06.745149  387539 kubeadm.go:310] 
	I0929 08:30:06.745232  387539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 08:30:06.745306  387539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 08:30:06.745369  387539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 08:30:06.745377  387539 kubeadm.go:310] 
	I0929 08:30:06.745445  387539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 08:30:06.745514  387539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 08:30:06.745520  387539 kubeadm.go:310] 
	I0929 08:30:06.745584  387539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745665  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 \
	I0929 08:30:06.745690  387539 kubeadm.go:310] 	--control-plane 
	I0929 08:30:06.745699  387539 kubeadm.go:310] 
	I0929 08:30:06.745764  387539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 08:30:06.745774  387539 kubeadm.go:310] 
	I0929 08:30:06.745853  387539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745968  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 
	I0929 08:30:06.745984  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:30:06.745992  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:30:06.748010  387539 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 08:30:06.749332  387539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 08:30:06.753814  387539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I0929 08:30:06.753848  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 08:30:06.772879  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
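Applying the CNI manifest above is a plain kubectl apply against the cluster's kubeconfig. A minimal sketch of that invocation, with binary and file paths taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}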
	I0929 08:30:06.985959  387539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 08:30:06.986041  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:06.986104  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-051783 minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78 minikube.k8s.io/name=addons-051783 minikube.k8s.io/primary=true
	I0929 08:30:06.996442  387539 ops.go:34] apiserver oom_adj: -16
	I0929 08:30:07.062951  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:07.563693  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.063933  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.563857  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.063020  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.563145  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.063764  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.564058  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.063584  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.131479  387539 kubeadm.go:1105] duration metric: took 4.145485124s to wait for elevateKubeSystemPrivileges
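The burst of "get sa default" calls above is a polling loop: the tooling waits (about 4.1s here) until the default ServiceAccount exists before binding it to cluster-admin. A sketch of such a loop; the overall timeout is an assumption, not a value from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls roughly every 500ms until the "default"
// ServiceAccount can be fetched or the timeout expires.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
		)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; defaults are in place
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}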
	I0929 08:30:11.131516  387539 kubeadm.go:394] duration metric: took 13.783475405s to StartCluster
	I0929 08:30:11.131536  387539 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.131680  387539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:30:11.132107  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.132380  387539 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:30:11.132425  387539 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 08:30:11.132561  387539 addons.go:69] Setting yakd=true in profile "addons-051783"
	I0929 08:30:11.132586  387539 addons.go:238] Setting addon yakd=true in "addons-051783"
	I0929 08:30:11.132592  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.132625  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132389  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 08:30:11.132650  387539 addons.go:69] Setting default-storageclass=true in profile "addons-051783"
	I0929 08:30:11.132650  387539 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-051783"
	I0929 08:30:11.132651  387539 addons.go:69] Setting registry-creds=true in profile "addons-051783"
	I0929 08:30:11.132672  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-051783"
	I0929 08:30:11.132675  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-051783"
	I0929 08:30:11.132684  387539 addons.go:238] Setting addon registry-creds=true in "addons-051783"
	I0929 08:30:11.132675  387539 addons.go:69] Setting storage-provisioner=true in profile "addons-051783"
	I0929 08:30:11.132723  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132729  387539 addons.go:69] Setting gcp-auth=true in profile "addons-051783"
	I0929 08:30:11.132737  387539 addons.go:69] Setting ingress=true in profile "addons-051783"
	I0929 08:30:11.132749  387539 addons.go:238] Setting addon ingress=true in "addons-051783"
	I0929 08:30:11.132751  387539 mustload.go:65] Loading cluster: addons-051783
	I0929 08:30:11.132786  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132903  387539 addons.go:69] Setting ingress-dns=true in profile "addons-051783"
	I0929 08:30:11.132921  387539 addons.go:238] Setting addon ingress-dns=true in "addons-051783"
	I0929 08:30:11.132932  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.133022  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133038  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133039  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133154  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133198  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133236  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133242  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133465  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.134910  387539 addons.go:69] Setting metrics-server=true in profile "addons-051783"
	I0929 08:30:11.134935  387539 addons.go:238] Setting addon metrics-server=true in "addons-051783"
	I0929 08:30:11.134966  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.135401  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133500  387539 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.136449  387539 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-051783"
	I0929 08:30:11.136484  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.136993  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.137446  387539 addons.go:69] Setting registry=true in profile "addons-051783"
	I0929 08:30:11.137472  387539 addons.go:238] Setting addon registry=true in "addons-051783"
	I0929 08:30:11.137504  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.137785  387539 out.go:179] * Verifying Kubernetes components...
	I0929 08:30:11.132620  387539 addons.go:69] Setting inspektor-gadget=true in profile "addons-051783"
	I0929 08:30:11.137998  387539 addons.go:238] Setting addon inspektor-gadget=true in "addons-051783"
	I0929 08:30:11.138030  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.138040  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.138478  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.132724  387539 addons.go:238] Setting addon storage-provisioner=true in "addons-051783"
	I0929 08:30:11.138872  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133573  387539 addons.go:69] Setting volcano=true in profile "addons-051783"
	I0929 08:30:11.133608  387539 addons.go:69] Setting volumesnapshots=true in profile "addons-051783"
	I0929 08:30:11.133632  387539 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.133523  387539 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-051783"
	I0929 08:30:11.133512  387539 addons.go:69] Setting cloud-spanner=true in profile "addons-051783"
	I0929 08:30:11.139071  387539 addons.go:238] Setting addon cloud-spanner=true in "addons-051783"
	I0929 08:30:11.139164  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139273  387539 addons.go:238] Setting addon volumesnapshots=true in "addons-051783"
	I0929 08:30:11.139284  387539 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-051783"
	I0929 08:30:11.139311  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139319  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140056  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:30:11.140193  387539 addons.go:238] Setting addon volcano=true in "addons-051783"
	I0929 08:30:11.140204  387539 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:11.140225  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140228  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.146698  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147224  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147394  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149077  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149662  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.151164  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.176264  387539 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 08:30:11.181229  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 08:30:11.181264  387539 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 08:30:11.181355  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.198928  387539 addons.go:238] Setting addon default-storageclass=true in "addons-051783"
	I0929 08:30:11.198980  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.200501  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.202621  387539 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 08:30:11.202751  387539 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 08:30:11.204060  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 08:30:11.204203  387539 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 08:30:11.204287  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.204590  387539 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 08:30:11.206350  387539 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 08:30:11.206413  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 08:30:11.206494  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	W0929 08:30:11.215084  387539 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 08:30:11.220539  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.228994  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 08:30:11.229058  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.230311  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 08:30:11.230348  387539 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 08:30:11.230415  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.230456  387539 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 08:30:11.232483  387539 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-051783"
	I0929 08:30:11.232653  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.234514  387539 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.234537  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 08:30:11.234593  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.236276  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.238980  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 08:30:11.240948  387539 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 08:30:11.242224  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.242345  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.242360  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 08:30:11.242423  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.249763  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.249815  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 08:30:11.249988  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.251632  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 08:30:11.252713  387539 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 08:30:11.256731  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 08:30:11.256909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.256925  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 08:30:11.257007  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.259232  387539 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 08:30:11.259246  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 08:30:11.261351  387539 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.261383  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 08:30:11.261446  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.261602  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 08:30:11.261990  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.264208  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 08:30:11.265661  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 08:30:11.266953  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 08:30:11.268988  387539 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 08:30:11.269090  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 08:30:11.270103  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.270359  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 08:30:11.270376  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 08:30:11.270435  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.270601  387539 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 08:30:11.270610  387539 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 08:30:11.270648  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.275993  387539 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 08:30:11.282092  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.282115  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 08:30:11.282181  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.285473  387539 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 08:30:11.290090  387539 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.291158  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 08:30:11.295912  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
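The pipeline above patches the CoreDNS Corefile in place: it injects a hosts block resolving host.minikube.internal to the gateway IP ahead of the forward plugin, and adds a log directive before errors. A rough Go equivalent of that text transformation, for illustration only:

package main

import (
	"fmt"
	"strings"
)

// patchCorefile inserts the hosts{} block before the forward plugin and a
// "log" line before "errors", mirroring the sed expressions in the log.
func patchCorefile(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		if trimmed == "errors" {
			out.WriteString("        log\n")
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(patchCorefile(corefile, "192.168.49.1"))
}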
	I0929 08:30:11.295961  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.299675  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.313891  387539 out.go:179]   - Using image docker.io/busybox:stable
	I0929 08:30:11.315473  387539 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 08:30:11.316814  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.316848  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 08:30:11.316910  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.317050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.323553  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.332930  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.335659  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.338799  387539 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:11.338893  387539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 08:30:11.338992  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.348819  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.349921  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:30:11.354726  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.358638  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.365096  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.375197  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.379217  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	W0929 08:30:11.383998  387539 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 08:30:11.384044  387539 retry.go:31] will retry after 372.305387ms: ssh: handshake failed: EOF
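The handshake failure above is retried after a short, jittered delay (372ms here) rather than failing the whole addon setup. A generic sketch of such a retry helper; the attempt count, base delay and jitter are assumptions and this is not minikube's retry package:

package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// retry runs op up to attempts times, sleeping with growing jittered backoff
// between failures, and returns the last error if all attempts fail.
func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		sleep := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		time.Sleep(sleep)
	}
	return err
}

func main() {
	err := retry(5, 300*time.Millisecond, func() error {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:33139", 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	})
	fmt.Println("result:", err)
}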
	I0929 08:30:11.384985  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.385740  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.455618  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 08:30:11.455652  387539 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 08:30:11.483956  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 08:30:11.483993  387539 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 08:30:11.501077  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 08:30:11.501104  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 08:30:11.512909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 08:30:11.512936  387539 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 08:30:11.513909  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 08:30:11.513933  387539 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 08:30:11.522184  387539 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.522210  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 08:30:11.532474  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.547827  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.549888  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 08:30:11.549921  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 08:30:11.551406  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 08:30:11.551429  387539 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 08:30:11.551604  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.551620  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 08:30:11.562054  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.567658  387539 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.567682  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 08:30:11.568342  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.575483  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.579024  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.580084  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.589345  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 08:30:11.589374  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 08:30:11.591142  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.596651  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.617511  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.639242  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 08:30:11.639268  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 08:30:11.640436  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.640457  387539 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 08:30:11.676132  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.683757  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 08:30:11.683933  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 08:30:11.694476  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.733321  387539 node_ready.go:35] waiting up to 6m0s for node "addons-051783" to be "Ready" ...
	I0929 08:30:11.737381  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 08:30:11.737409  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 08:30:11.739451  387539 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 08:30:11.742034  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 08:30:11.742058  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 08:30:11.860616  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 08:30:11.860647  387539 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 08:30:11.867313  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 08:30:11.867348  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 08:30:11.967456  387539 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:11.967489  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 08:30:11.972315  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 08:30:11.972363  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 08:30:12.022878  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:12.038007  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 08:30:12.038036  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 08:30:12.049218  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:12.116439  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 08:30:12.116470  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 08:30:12.218447  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 08:30:12.218482  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 08:30:12.270160  387539 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-051783" context rescaled to 1 replicas
	I0929 08:30:12.276753  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 08:30:12.276954  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 08:30:12.325380  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.325408  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 08:30:12.363377  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.640545  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.07217093s)
	W0929 08:30:12.640603  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:12.640631  387539 retry.go:31] will retry after 237.04452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
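
The apply keeps failing on ig-crd.yaml because kubectl's validation rejects any document that does not declare apiVersion and kind; the deployment manifest itself applies cleanly, which is why later retries report "unchanged"/"configured" while the CRD file keeps erroring. A hedged pre-flight sketch that checks those two fields before applying follows; gopkg.in/yaml.v3 and the hard-coded path are assumptions, not minikube code.

// Sketch: flag manifest documents missing apiVersion/kind before kubectl sees them.
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml") // path taken from the log
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for i := 1; ; i++ {
		var tm typeMeta
		if err := dec.Decode(&tm); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents in the file
			}
			panic(err)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Printf("document %d: apiVersion and/or kind not set; kubectl validation will reject it\n", i)
		}
	}
}
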
	I0929 08:30:12.640719  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.065212731s)
	I0929 08:30:12.641043  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.061988054s)
	I0929 08:30:12.641104  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.060998244s)
	I0929 08:30:12.641174  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049961126s)
	I0929 08:30:12.837190  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.240492795s)
	I0929 08:30:12.837239  387539 addons.go:479] Verifying addon ingress=true in "addons-051783"
	I0929 08:30:12.837345  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.219781667s)
	I0929 08:30:12.837419  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161075095s)
	I0929 08:30:12.837447  387539 addons.go:479] Verifying addon registry=true in "addons-051783"
	I0929 08:30:12.837566  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.142937066s)
	I0929 08:30:12.837594  387539 addons.go:479] Verifying addon metrics-server=true in "addons-051783"
	I0929 08:30:12.839983  387539 out.go:179] * Verifying ingress addon...
	I0929 08:30:12.839983  387539 out.go:179] * Verifying registry addon...
	I0929 08:30:12.839983  387539 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-051783 service yakd-dashboard -n yakd-dashboard
	
	I0929 08:30:12.842161  387539 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 08:30:12.843164  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 08:30:12.846165  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:12.846189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:12.846718  387539 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 08:30:12.846741  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
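
The kapi.go lines above poll pods by label selector until they leave Pending. A rough client-go equivalent of that check is sketched below; the kubeconfig path and selector are taken from the log, but this is an illustrative sketch, not the test framework's own polling code.

// Sketch: list pods matching a label selector and print their phase.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase) // e.g. registry-xxxxx: Pending
	}
}
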
	I0929 08:30:12.878020  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.347067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.347316  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.444185  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.394912895s)
	W0929 08:30:13.444269  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 08:30:13.444303  387539 retry.go:31] will retry after 148.150087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
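
The "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first" failure is an ordering problem: the snapshot class is applied in the same pass that creates the CRDs defining it, so the first attempt fails until those CRDs are established and a retry succeeds. One way to wait for establishment explicitly is sketched below, shelling out to kubectl; the timeout and this approach are assumptions, not what minikube itself does.

// Sketch: block until the snapshot.storage.k8s.io CRDs are Established.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	crds := []string{
		"volumesnapshotclasses.snapshot.storage.k8s.io",
		"volumesnapshotcontents.snapshot.storage.k8s.io",
		"volumesnapshots.snapshot.storage.k8s.io",
	}
	for _, crd := range crds {
		// kubectl wait returns once the CRD reports condition Established=True.
		cmd := exec.Command("kubectl", "wait",
			"--for=condition=established", "--timeout=60s", "crd/"+crd)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Printf("waiting for %s failed: %v\n", crd, err)
		}
	}
}
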
	I0929 08:30:13.444442  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.080991087s)
	I0929 08:30:13.444483  387539 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:13.446118  387539 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 08:30:13.448654  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 08:30:13.452016  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:13.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:13.577429  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.577457  387539 retry.go:31] will retry after 254.552952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.593694  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0929 08:30:13.737433  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:13.832408  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.846313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.846455  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.952328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.346125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.346258  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.845799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.845811  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.951680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.346030  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.346221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.453724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.845371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.845746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.952128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.053703  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.459968372s)
	I0929 08:30:16.053810  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.22138062s)
	W0929 08:30:16.053859  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:16.053883  387539 retry.go:31] will retry after 481.367348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:16.235952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:16.346141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.346415  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.452678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.535851  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:16.846177  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.846299  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.951988  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:17.090051  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.090084  387539 retry.go:31] will retry after 480.173629ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.345653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.345864  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.453018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:17.571186  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:17.846646  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.846705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.952363  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:18.133672  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:18.133711  387539 retry.go:31] will retry after 1.605452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:18.236698  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:18.345996  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.346227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.831696  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 08:30:18.831773  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:18.846470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.846549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.851454  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:18.951695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.969096  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 08:30:18.989016  387539 addons.go:238] Setting addon gcp-auth=true in "addons-051783"
	I0929 08:30:18.989103  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:18.989486  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:19.008865  387539 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 08:30:19.008932  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:19.027173  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:19.120755  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:19.121923  387539 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 08:30:19.122900  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 08:30:19.122919  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 08:30:19.143102  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 08:30:19.143126  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 08:30:19.162866  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.162888  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 08:30:19.183136  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.346348  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.346554  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.453192  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:19.501972  387539 addons.go:479] Verifying addon gcp-auth=true in "addons-051783"
	I0929 08:30:19.503639  387539 out.go:179] * Verifying gcp-auth addon...
	I0929 08:30:19.505850  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 08:30:19.554509  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 08:30:19.554531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:19.740347  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:19.845786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.845969  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.951989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.008598  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.299545  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.299581  387539 retry.go:31] will retry after 1.544699875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.345964  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.452158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.736317  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:20.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.845861  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.951742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.009815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.346000  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.346032  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.451989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.508685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.845176  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:21.845841  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.846114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.952278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.009273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:22.345019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.346075  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 08:30:22.403582  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.403621  387539 retry.go:31] will retry after 3.049515308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.452614  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.512271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:22.736403  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:22.845553  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:22.951921  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.010165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.345659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.345820  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.451629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.509351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.846115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.846228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.952047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.008926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:24.346005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.452131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.509321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:24.737273  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:24.845357  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.845622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.951671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.010110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.346716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.346788  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.453468  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:25.510278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.845392  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.845982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.951775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:26.006239  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.006394  387539 retry.go:31] will retry after 2.506202781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.008893  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.346077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.346300  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.452870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:26.510002  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.845936  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.846437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.952599  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.010142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:27.237031  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:27.345974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.346037  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.451702  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.509719  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:27.845995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.846262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.952122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.008966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.345646  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.346068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.452500  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.509096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.513240  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:28.845526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.845724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.952636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.009980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.073172  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:29.073204  387539 retry.go:31] will retry after 5.087993961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
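The validation error above repeats unchanged on every retry: kubectl's client-side check rejects any manifest document that omits its type metadata, and the message names exactly the two missing fields. As a minimal sketch only (the group, kind, and name below are hypothetical illustrations, not taken from ig-crd.yaml), each document in the applied file needs at least:

	apiVersion: apiextensions.k8s.io/v1        # required; "apiVersion not set" means this line is absent
	kind: CustomResourceDefinition             # required; "kind not set" means this line is absent
	metadata:
	  name: examples.gadget.example.io         # hypothetical placeholder name
	spec:
	  # group/versions/schema omitted here; they are not what the validator is rejecting

Since the gadget namespace, RBAC objects, and daemonset all apply cleanly in the same invocation, the stderr suggests that at least one document reaching /etc/kubernetes/addons/ig-crd.yaml in this run is empty or lacks those two fields, which is why every subsequent retry fails identically.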
	I0929 08:30:29.345624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.345890  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.451566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.509314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.736247  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:29.845167  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.845589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.952470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.009285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.345961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.346228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.451762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.509671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.845660  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.845938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.951757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.010434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:31.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.346159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.452024  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.508639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:31.736734  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:31.845802  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.846069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.008631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.345183  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.345554  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.452360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.509283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.846011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.846198  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.952029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.008505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.345468  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.346184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.845492  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.845973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.951615  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.009499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:34.161747  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 08:30:34.236880  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:34.346017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.346168  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.451966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:34.713989  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.714029  387539 retry.go:31] will retry after 10.074915141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.846205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.846262  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.952041  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.009299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.346101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.346147  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.452133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.508814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.845885  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.846022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.952026  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.008870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:36.345968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.346092  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.452038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.508708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:36.736573  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:36.845946  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.846138  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.951934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.010147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.345611  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.346391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.452092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.508537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.845236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.845710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.951391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.009185  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:38.345379  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.345497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.452268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.509054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:38.736952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:38.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.845942  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.951848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.009583  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.345482  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.345749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.452467  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.509234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.845877  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.845968  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.951690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:40.345848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.346009  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.451555  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:40.737059  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:40.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.845985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.951632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.009343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.345541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.346172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.452233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.509214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.846040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.846112  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.951896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.009603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.345289  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.451783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.509700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.845799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.845983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.951967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.008596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:43.236598  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:43.346000  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.346147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.452087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.509013  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:43.846134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.846259  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.952036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.008744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.345998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.346244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.452116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.508722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.789668  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:44.848890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.848956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.952825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:45.346063  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.346265  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:30:45.349824  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.349902  387539 retry.go:31] will retry after 10.254228561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.451609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.509499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:45.736311  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:45.845308  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:45.845508  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.952578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.345276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.345820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.451640  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.509515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.845665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.845801  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.951610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.009568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:47.346135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.347757  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.451685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.509687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:47.736659  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:47.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.846278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.010881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.345580  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.346116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.508539  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.845649  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.845738  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.951441  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.009204  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.345513  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.345678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.451528  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.845483  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.846049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.951870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.009622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:50.236705  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:50.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.346397  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.452090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.508959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:50.845410  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.846029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.008722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.345637  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.346169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.452115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.845715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.845962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.951758  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.009370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.345902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.451699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.509385  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.735450  387539 node_ready.go:49] node "addons-051783" is "Ready"
	I0929 08:30:52.735486  387539 node_ready.go:38] duration metric: took 41.00212415s for node "addons-051783" to be "Ready" ...
	I0929 08:30:52.735510  387539 api_server.go:52] waiting for apiserver process to appear ...
	I0929 08:30:52.735569  387539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 08:30:52.754269  387539 api_server.go:72] duration metric: took 41.621848619s to wait for apiserver process to appear ...
	I0929 08:30:52.754302  387539 api_server.go:88] waiting for apiserver healthz status ...
	I0929 08:30:52.754329  387539 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 08:30:52.758629  387539 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 08:30:52.759566  387539 api_server.go:141] control plane version: v1.34.1
	I0929 08:30:52.759591  387539 api_server.go:131] duration metric: took 5.283085ms to wait for apiserver health ...
	I0929 08:30:52.759601  387539 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 08:30:52.763531  387539 system_pods.go:59] 20 kube-system pods found
	I0929 08:30:52.763568  387539 system_pods.go:61] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.763584  387539 system_pods.go:61] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.763591  387539 system_pods.go:61] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.763598  387539 system_pods.go:61] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.763604  387539 system_pods.go:61] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.763610  387539 system_pods.go:61] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.763618  387539 system_pods.go:61] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.763625  387539 system_pods.go:61] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.763632  387539 system_pods.go:61] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.763646  387539 system_pods.go:61] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.763655  387539 system_pods.go:61] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.763661  387539 system_pods.go:61] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.763671  387539 system_pods.go:61] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.763677  387539 system_pods.go:61] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.763685  387539 system_pods.go:61] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.763695  387539 system_pods.go:61] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.763703  387539 system_pods.go:61] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.763711  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.763762  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.763769  387539 system_pods.go:61] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.763779  387539 system_pods.go:74] duration metric: took 4.172047ms to wait for pod list to return data ...
	I0929 08:30:52.763792  387539 default_sa.go:34] waiting for default service account to be created ...
	I0929 08:30:52.766094  387539 default_sa.go:45] found service account: "default"
	I0929 08:30:52.766121  387539 default_sa.go:55] duration metric: took 2.321933ms for default service account to be created ...
	I0929 08:30:52.766133  387539 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 08:30:52.770696  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:52.770757  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.770770  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.770776  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.770784  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.770789  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.770794  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.770802  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.770808  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.770815  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.770824  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.770843  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.770851  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.770863  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.770872  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.770881  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.770891  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.770899  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.770908  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.770928  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.770935  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.770959  387539 retry.go:31] will retry after 296.951592ms: missing components: kube-dns
	I0929 08:30:52.847272  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:52.847306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.847283  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.956403  387539 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:52.956428  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.058959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.074050  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.074084  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.074092  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.074102  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.074109  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.074114  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.074118  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.074124  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.074127  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.074131  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.074136  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.074139  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.074143  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.074148  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.074158  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.074162  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.074167  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.074171  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.074177  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074185  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074189  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.074204  387539 retry.go:31] will retry after 260.486294ms: missing components: kube-dns
	I0929 08:30:53.340885  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.340928  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.340939  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.340949  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.340957  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.340970  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.340976  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.340984  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.340989  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.340994  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.341002  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.341007  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.341013  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.341020  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.341029  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.341037  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.341045  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.341052  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.341071  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341079  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341086  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.341104  387539 retry.go:31] will retry after 402.781904ms: missing components: kube-dns
	I0929 08:30:53.345674  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.452965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.509656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.749539  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.749584  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.749596  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.749607  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.749615  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.749625  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.749637  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.749644  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.749652  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.749658  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.749673  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.749681  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.749688  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.749700  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.749713  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.749725  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.749741  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.749752  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.749760  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749772  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749780  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.749803  387539 retry.go:31] will retry after 372.296454ms: missing components: kube-dns
	I0929 08:30:53.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.846351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.953470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.009621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.127961  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:54.128007  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:54.128016  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Running
	I0929 08:30:54.128029  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:54.128037  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:54.128046  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:54.128055  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:54.128068  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:54.128073  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:54.128080  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:54.128094  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:54.128101  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:54.128111  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:54.128119  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:54.128131  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:54.128140  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:54.128150  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:54.128156  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:54.128167  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128182  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128190  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Running
	I0929 08:30:54.128201  387539 system_pods.go:126] duration metric: took 1.362060932s to wait for k8s-apps to be running ...
	I0929 08:30:54.128214  387539 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 08:30:54.128269  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 08:30:54.143506  387539 system_svc.go:56] duration metric: took 15.282529ms WaitForService to wait for kubelet
	I0929 08:30:54.143541  387539 kubeadm.go:578] duration metric: took 43.011126136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:30:54.143567  387539 node_conditions.go:102] verifying NodePressure condition ...
	I0929 08:30:54.146666  387539 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 08:30:54.146694  387539 node_conditions.go:123] node cpu capacity is 8
	I0929 08:30:54.146710  387539 node_conditions.go:105] duration metric: took 3.13874ms to run NodePressure ...
	I0929 08:30:54.146723  387539 start.go:241] waiting for startup goroutines ...
	I0929 08:30:54.346096  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.346452  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.452512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.845681  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.846213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.952945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.009776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.346034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.346210  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.452987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.604936  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:55.845661  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.846303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.952647  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.009596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:56.227075  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:56.227117  387539 retry.go:31] will retry after 11.111742245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:56.346587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.346664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.452545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.509737  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:56.846282  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.846404  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.952291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.008904  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.346213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.452947  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.553095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.845310  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.845536  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.952617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.009229  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.345911  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.452036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.509465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.846116  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.846300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.954223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.009020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.345799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.451999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.846016  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.846048  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.951820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.009510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.346008  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.346043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.452095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.845635  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.846133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.952120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.008582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.346305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.346398  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.452779  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.509350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.845977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.846089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.951976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.009725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.346046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.346195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.452152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.508856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.845624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.845816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.951786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.009165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.345570  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.345806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.452275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.508934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.846184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.846321  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.009280  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.345995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.346111  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.452256  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.509372  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.845664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.846025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.952025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.009380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.346175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.346181  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.452623  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.845511  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.845789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.951736  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.345807  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.346120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.452299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.845431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.845747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.951811  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.009905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.339106  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:07.345597  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:07.346187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.452931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.509578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.846266  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:31:07.899059  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:31:07.899089  387539 retry.go:31] will retry after 40.559996542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
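[editor's note] The two apply failures above (08:30:56 and 08:31:07) are kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml: at least one YAML document in that file appears to lack the top-level apiVersion and kind fields, so every retry fails the same way unless --validate=false is passed. For reference only, a CustomResourceDefinition manifest that would pass this validation starts with those two fields; the sketch below is a minimal, hypothetical example (the group, kind, and resource names are illustrative placeholders, not the actual contents of ig-crd.yaml):

    # Minimal CRD sketch that satisfies kubectl validation: apiVersion and kind
    # are present at the top level. Names below are assumptions for illustration.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.example.io        # must be <plural>.<group>
    spec:
      group: gadget.example.io              # illustrative group
      names:
        kind: Trace
        plural: traces
        singular: trace
      scope: Namespaced
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              x-kubernetes-preserve-unknown-fields: true

The fix in CI would be to the generated ig-crd.yaml itself (not shown in this log); the retries that follow cannot succeed while the file is missing those fields.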
	I0929 08:31:07.952238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.009242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.345806  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.345963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.452237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.508727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.846489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.846533  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.952772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.010175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.346214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.346399  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.452814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.509683  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.846071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.846175  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.009101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.345238  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.346055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.509087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.845466  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.845735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.951734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.009376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.346018  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.346093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.452602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.509357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.845819  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.846106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.952393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.009094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.345109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.345635  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.452900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.509747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.845711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.009115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.345408  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.452396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.509231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.845494  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.846119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.952602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.010164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.346040  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.452353  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.845489  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.845815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.009711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.346376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.452852  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.509706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.846977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.847062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.952541  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.009327  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.345888  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.346265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.452465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.845448  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.845961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.010066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.345301  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.345698  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.451859  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.552769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.845897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.846010  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.951895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.345789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.345935  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.451969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.509592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.845904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.846320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.952560  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.009221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.345672  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.509390  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.845688  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.845944  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.346107  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.451968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.845746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.846140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.952760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.345888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.345967  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.452022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.510304  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.845633  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.846006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.952314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.346112  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.346281  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.452380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.509171  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.845463  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.952321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.008794  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.345924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.346134  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.452014  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.510198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.845423  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.845908  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.952121  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.008788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.345818  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.345880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.452709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.846079  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.346408  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.346645  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.452594  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.509856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.846416  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.846446  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.952577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.009243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.346002  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.346328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.452568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.509226  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.845630  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.845989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.009102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.344984  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.345670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.451721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.509670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.846298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.846328  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.952436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.009088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.345071  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.345514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.452990  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.509800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.845538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.845549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.952752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.345731  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.345767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.451898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.845660  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.845743  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.954437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.009591  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.345694  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.345826  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.451850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.509114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.845457  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.845863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.008880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.452522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.845340  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.846098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.345562  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.345751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.451752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.509498  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.846005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.846015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.952296  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.008916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.346067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.346085  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.452074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.508388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.846407  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.952505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.009198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.345603  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.345997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.452284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.508994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.845333  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.845899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.346173  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.346187  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.452081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.508670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.848907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.848908  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.951592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.009305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.345881  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.346217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.452391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.509291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.846291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.952619  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.345641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.346183  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.452340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.845435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.845657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.951659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.009365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.345904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.345948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.508874  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.846399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.952667  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.009535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.346057  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.346313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.452593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.509172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.845821  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.951931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.009666  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.345746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.345756  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.451930  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.509717  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.845968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.846159  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.952302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.008813  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.345751  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.346083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.452220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.508800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.846373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.846428  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.952582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.009477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.345816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.346146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.509082  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.845206  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.845593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.952920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.009344  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.346032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.452584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.846130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.846227  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.952242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.009320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.345668  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.346165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.452320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.846497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.846568  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.009270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.346009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.346017  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.452179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.508810  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.846318  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.953200  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.053765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.345928  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.345949  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.451841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.509367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.845759  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.009049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.346089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.346296  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.509276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.845998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.846031  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.953092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.008958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.348118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.348220  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.452645  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.459706  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:48.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.845521  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.846369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.952245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:31:49.009817  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:31:49.009958  387539 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
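	[Editor's note] The validation failure above means kubectl rejected /etc/kubernetes/addons/ig-crd.yaml before applying it: the manifest is missing its top-level apiVersion and kind fields, while the objects from ig-deployment.yaml were still applied (hence the "unchanged"/"configured" lines in stdout). Minikube retries the addon callback, as the warning notes. A minimal sketch of how one could inspect and re-validate the file on the node is shown below, reusing the kubeconfig and binary paths quoted in the log; the dry-run invocation is an editor's suggestion, not part of the original test run, and --validate=false (mentioned in the error text) would merely skip the check rather than fix the manifest.

	# sketch only: check whether the CRD manifest declares apiVersion/kind at the top level
	sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml

	# sketch only: client-side validation without mutating the cluster
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	  -f /etc/kubernetes/addons/ig-crd.yaml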
	I0929 08:31:49.346161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.346314  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.452693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.509721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:49.846323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.846403  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.952288  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.009479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.346165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.346262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.511027  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.846141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.846346  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.952309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.345651  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.346358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.452496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.845910  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.846102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.952292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.008948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.346231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.346476  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.452572  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.846165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.846219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.009004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.452012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.510161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.845342  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.845616  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.952894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.009820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.346066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.346111  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.451951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.509668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.845920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.845975  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.952307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.008953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.346482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.346564  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.452557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.509198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.846008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.846122  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.952273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.009005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.345943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.345987  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.451970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.509693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.846279  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.846364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.952734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.345985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.452169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.509107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.845868  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.845918  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.952230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.008806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.346324  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.346362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.452386  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.509302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.845621  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.952271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.009231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.345552  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.346005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.452425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.509368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.846005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.846038  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.009825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.346371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.346435  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.845617  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.845923  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.952434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.009268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.346190  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.452432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.845820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.952038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.009864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.345911  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.346056  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.452757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.845906  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.846292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.952670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.345785  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.346020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.452457  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.509461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.846203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.952857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.008766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.346191  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.346205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.509374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.845874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.846090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.952199  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.009031  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.345858  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.345930  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.451888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.846482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.846625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.952585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.009218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.345706  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.452653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.509286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.845541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.845704  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.951956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.009468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.345695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.345745  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.451863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.510159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.845888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.845901  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.951951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.345980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.346046  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.452589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.509271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.846034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.952511  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.008945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.346573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.346620  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.452981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.509795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.846438  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.009110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.345733  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.451902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.509713  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.846101  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.846139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.952420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.009168  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.346099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.346223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.510142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.845960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.951897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.010286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.345508  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.346153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.452434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.845813  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.846236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.952299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.345858  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.346006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.452117  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.508849  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.845790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.846007  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.951901  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.346064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.346065  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.452106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.508883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.846158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.846171  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.952374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.008914  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.346557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.346608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.509895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.846827  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.846861  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.952699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.009411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.345859  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.346429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.452726  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.846572  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.846610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.952453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.345250  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.345814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.508754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.846167  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.846211  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.346188  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.509267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.846236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.846235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.952637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.009342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.345703  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.346091  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.509449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.846316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.846344  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.952405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.345264  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.346400  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.452542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.509262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.845773  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.346194  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.509376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.848839  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.849867  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.952936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.010023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.346114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.452763  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.846197  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.846244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.952388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.009290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.345800  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.346246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.452672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.509534  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.846304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.846334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.952785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.009642  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.346072  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.346415  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.452739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.509705  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.846107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.846335  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.952786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.009641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.346282  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.346356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.452912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.509769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.846639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.846675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.953086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.009130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.452469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.510429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.845959  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.846628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.953298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.009036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.347053  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:27.347275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.452777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.846103  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.846145  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.072906  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.073113  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.508950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.845955  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.846095  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.009351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.347464  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.347629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.453517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.553437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.846126  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.846245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.008971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.345959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.346015  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.509418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.845766  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.846285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.008956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.345931  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.346361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.452474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.845897  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.846021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.952093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.345435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.345772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.452246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.509083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.845812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.845956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.346099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.346120  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:33.452146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.508729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.846479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.036243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.036382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.345600  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.452267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.508982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.845610  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.845774  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.953630  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.008888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.346785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.346853  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.451866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.509729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.846406  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.954174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.055655  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.845459  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.845939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.951953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.009866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.346021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.346064  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.452076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.509650  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.952853  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.345624  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.452271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.509005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.845239  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.845607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.953072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.009685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.346312  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.346343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.452629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.846305  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.952898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.009523  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.346058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.346222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.452218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.509154  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.845436  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.845959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.952223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.008967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.345362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.345715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.451987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.509593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.846208  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.952460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.009083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.345364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.345994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.452312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.845412  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.846137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.009246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.345531  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.451965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.509607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.845677  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.845725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.953242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.008881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.346140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.346245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.508976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.846058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.846073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.008952  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.452230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.508958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.846253  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.846260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.952496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.009248  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.345700  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.346422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.452785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.846041  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.951796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.009505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.345956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.345992  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.451971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.509761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.846334  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.952805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.009735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.345689  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.346306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.452750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.845880  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.846359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.952570  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.009297  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.345969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.346094  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.452240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.509049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.846006  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.952184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.008907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.345976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.346081  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.451788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.510100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.845304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.848309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.345805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.345874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.451634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.509582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.845944  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.846447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.953076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.008934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.345804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.345877  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.452096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.508656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.846195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.846222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.952603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.009374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.345675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.846118  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.952566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.009207  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.345383  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.452193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.508803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.846518  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.846608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.952787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.009360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.346141  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.346211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.452319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.846350  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.846419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.952451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.009066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.345454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.345940  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.452221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.508812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.846088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.846113  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.952011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.345986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.346090  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.846361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.846431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.952871  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.009495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.346447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.346500  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.452249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.508841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.845781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.845828  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.951889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.009775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.346440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.346485  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.452552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.008817  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.346461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.346526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.455517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.845761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.845875  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.952068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.009767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.346151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.346291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.452530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.553772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.845974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.846019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.010114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.345293  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.345801  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.451761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.845976  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.846143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.952766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.009431  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.345682  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.346257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.453746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.509942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.846258  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.846309  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.952266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.009753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.346015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.346114  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.452202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.846315  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.846361  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.952432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.009137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.345758  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:05.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.552401  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.846099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.846460  387539 kapi.go:107] duration metric: took 2m53.003293209s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 08:33:05.954425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.011134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.346506  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.509064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.845958  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.952356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.009108  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.453032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.510592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.846109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.954081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.053417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.351454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.453361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.509493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.846396  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.953209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.013355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.346185  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.452954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.509941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.846594  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.953166  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.011098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.345673  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.452685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.846291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.952757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.010232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.345715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.452872  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.509757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.845940  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.009576  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.476146  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.476164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.508903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.846546  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.952547  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.009054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.345224  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:13.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.845854  387539 kapi.go:107] duration metric: took 3m1.003676867s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 08:33:13.953193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.953043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.009571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.509029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.952456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.008996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.452993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.509565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.951754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.010077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.452637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.509767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.951958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.009558  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.452610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.509383  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.953289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.452727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.952537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.054307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.453283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.508941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.509772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.452743  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.509695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.952306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.952897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.009648  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.508741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.952701  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.009545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.452359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.552870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.952571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.509263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.009531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.452141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.952219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.008825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.452943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.509596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.951821  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.009481  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.509195  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.953059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.053354  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.452999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.951979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.009797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.453388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.508724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.952067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.009597  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.452510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.509504  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.009757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.451725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.952055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.009994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.509072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.952958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.009293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.453339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.009056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.453293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.508838  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.953074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.013450  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.452649  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.509512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.952032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.009978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.509308  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.952931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.452323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.953222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.009006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.452790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.509538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.951932  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.009432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.455147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.508750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.952251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.009149  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.453440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.009671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.451894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.509637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.951679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.009272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.452122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.509896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.952875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.009456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.452086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.509855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.009503  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.509412  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.951948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.009749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.508624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.452489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.509007  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.952454  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.452929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.509326  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.953179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.009573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.452080  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.952316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.008983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.509589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.509141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.952449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.009163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.452673  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.952399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.008968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.452357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.509312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.008903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.452740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.008515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.952684  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.009269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.452860  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.509842  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.952800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.009471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.452132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.508760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.952191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.008875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.452781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.953587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.054438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.452155  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.508625  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.009015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.452064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.508595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.952010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.010061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.452878  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.509741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.008974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.452307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.508972  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.952590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.452989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.952475  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.453033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.509562  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.008939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.453017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.010460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.451978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.509900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.008912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.452986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.509922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.009396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.452015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.508696  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.053986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.453071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.508707  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.509938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.952708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.951905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.452055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.509716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.952153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.009034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.452857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.509634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.952411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.009151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.453043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.508787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.951746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.452755  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.509577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.951855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.009721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.452270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.509070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.952417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.009119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.452899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.509945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.008973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.452420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.952703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.009419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.452368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.509153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.009176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.451907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.509703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.952486  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.009310  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.453128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.509247  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.952807  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.452479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.951882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.009724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.952303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.052740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.452786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.509524  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.952084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.009393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.452606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.509227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.952919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.009449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.509272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.953056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.008665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.452311  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.952950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.452806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.951992  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.009677  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.454926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.556176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.952552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.009135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.452491  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.509187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.952765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.010044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.453284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.509124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.452601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.509427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.952099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.008641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.452715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.952690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.009533  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.452468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.509120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.952652  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.009453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.452283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.509034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.952982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.010277  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.452898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.509951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.952333  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.009152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.452796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.509514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.951891  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.452769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.509365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.952087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.009812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.508954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.953223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.009045  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.508795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.952125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.452644  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.509926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.009805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.452339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.509062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.952706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.009289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.453174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.553316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.952985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.009340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.453131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.951783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.009764  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.509221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.952799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.009661  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.451963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.951981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.009474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.451982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.510046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.952776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.009347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.451710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.952334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.452851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.509879  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.952761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.009609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.453093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.508618  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.952367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.009335  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.451828  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.509765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.952131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.008768  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.452125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.508617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.951915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.452347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.509044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.953033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.008575  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.509020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.009883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.952427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.008882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.451996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.509798  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.952349  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.008994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.452078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.953244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.008791  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.452820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.509438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.952276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.454329  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.508526  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.951927  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.009514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.509176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.953124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.008742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.452318  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.952978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.008626  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.451991  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.509530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.008765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.452089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.952535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.009257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.452850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.509391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.951665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.010070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.452234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.508751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.952557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.452356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.509081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.952954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.451578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.509069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.952979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.009394  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.451672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.953084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.008804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.452100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.508590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.952186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.008919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.451692  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.952159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.008936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.452290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.509522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.952657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.452687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.952004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.009665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.509219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.953317  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.053305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.452957  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.509406  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.951753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.010494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.451613  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.951916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.009368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.451621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.509537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.951986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.009697  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.509309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.953131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.008745  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.452118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.508915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.952506  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.009283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.453125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.508686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.009048  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.452863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.509403  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.952211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.009143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.452579  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.952593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.009236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.452668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.509287  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.953152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.008951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.451960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.951797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.009781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.452176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.508962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.952918  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.010145  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.452488  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.509471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.951970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.009582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.451912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.508700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.952497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.453230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.509119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.952889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.454455  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.509009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.953474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.009465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.452010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.509605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.951929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.508723  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.053411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.452665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.953146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.008802  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.451806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.509590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.952410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.053369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.509264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.952818  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.009233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.451994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.509760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.009364  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.452180  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.509156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.009587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.451744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.509487  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.008678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.451795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.509551  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.952298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.452628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.509567  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.952018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.008605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.509196  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.953269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.009042  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.452866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.509473  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.952009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.053084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.509189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.952595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.452191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.508730  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.952389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.452680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.509241  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.952532  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.009493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.452238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.509131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.952695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.009405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.452184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.509012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.952350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.009078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.452686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.509295  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.953015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.008664  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.452062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.508632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.952395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.008941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.451875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.509433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.952771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.009472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.509331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.953175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.009259  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.453005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.509759  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.952445  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.008890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.452239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.952339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.452889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.053004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.452816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.509585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.951856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.009542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.452139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.508997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.009668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.452051  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.952019  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.008662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.451816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.509495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.953217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.008712  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.452395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.008657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.451985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.509265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.953263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.008734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.509077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.952688  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.009433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.952693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.009377  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.452681  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.509209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.952342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.009052  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.452762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.509115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.953186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.010178  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.509505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.951715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.009812  387539 kapi.go:107] duration metric: took 5m46.503976887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 08:36:06.011826  387539 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-051783 cluster.
	I0929 08:36:06.013337  387539 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 08:36:06.014809  387539 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0929 08:36:06.452825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.952244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.452410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.952142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.452175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.451974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.953036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.452917  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.953235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.451608  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.952203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.952132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:13.449535  387539 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0929 08:36:13.449570  387539 kapi.go:107] duration metric: took 6m0.00092228s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0929 08:36:13.449699  387539 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0929 08:36:13.451535  387539 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth
	I0929 08:36:13.453038  387539 addons.go:514] duration metric: took 6m2.320628972s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth]
	I0929 08:36:13.453089  387539 start.go:246] waiting for cluster config update ...
	I0929 08:36:13.453117  387539 start.go:255] writing updated cluster config ...
	I0929 08:36:13.453476  387539 ssh_runner.go:195] Run: rm -f paused
	I0929 08:36:13.457677  387539 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:13.461120  387539 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.465176  387539 pod_ready.go:94] pod "coredns-66bc5c9577-n8bx8" is "Ready"
	I0929 08:36:13.465203  387539 pod_ready.go:86] duration metric: took 4.058605ms for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.467075  387539 pod_ready.go:83] waiting for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.470714  387539 pod_ready.go:94] pod "etcd-addons-051783" is "Ready"
	I0929 08:36:13.470733  387539 pod_ready.go:86] duration metric: took 3.636114ms for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.472521  387539 pod_ready.go:83] waiting for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.476217  387539 pod_ready.go:94] pod "kube-apiserver-addons-051783" is "Ready"
	I0929 08:36:13.476238  387539 pod_ready.go:86] duration metric: took 3.697266ms for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.478025  387539 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.862501  387539 pod_ready.go:94] pod "kube-controller-manager-addons-051783" is "Ready"
	I0929 08:36:13.862531  387539 pod_ready.go:86] duration metric: took 384.48807ms for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.061450  387539 pod_ready.go:83] waiting for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.461226  387539 pod_ready.go:94] pod "kube-proxy-wbl7p" is "Ready"
	I0929 08:36:14.461255  387539 pod_ready.go:86] duration metric: took 399.774957ms for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.661898  387539 pod_ready.go:83] waiting for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061371  387539 pod_ready.go:94] pod "kube-scheduler-addons-051783" is "Ready"
	I0929 08:36:15.061418  387539 pod_ready.go:86] duration metric: took 399.4933ms for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061435  387539 pod_ready.go:40] duration metric: took 1.603719933s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:15.109384  387539 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 08:36:15.111939  387539 out.go:179] * Done! kubectl is now configured to use "addons-051783" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.397095727Z" level=info msg="Stopping container: a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf (timeout: 30s)" id=1c91882b-aae9-4c1a-ba09-c70008c45a7e name=/runtime.v1.RuntimeService/StopContainer
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.538482427Z" level=info msg="Stopped container 45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f: kube-system/snapshot-controller-7d9fbc56b8-xpkwb/volume-snapshot-controller" id=38996990-745f-4e34-80f5-457de542a1da name=/runtime.v1.RuntimeService/StopContainer
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.538552187Z" level=info msg="Stopped container a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf: kube-system/snapshot-controller-7d9fbc56b8-n65gp/volume-snapshot-controller" id=1c91882b-aae9-4c1a-ba09-c70008c45a7e name=/runtime.v1.RuntimeService/StopContainer
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.539134667Z" level=info msg="Stopping pod sandbox: 6d94b7786d2915f4214f448fb35adab2481a102bc5df4f3dd362d8176443cb29" id=1cfa9fec-b1d4-4cb7-aec1-849cdfc5d552 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.539139742Z" level=info msg="Stopping pod sandbox: f6de9f678281f56eeedbf6ae6901b4ae6467e038063d202c016767b6202aac56" id=300e3690-d2b3-4630-9654-8f880843caf0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.539460629Z" level=info msg="Got pod network &{Name:snapshot-controller-7d9fbc56b8-n65gp Namespace:kube-system ID:6d94b7786d2915f4214f448fb35adab2481a102bc5df4f3dd362d8176443cb29 UID:d8bddc78-350d-45c5-9361-48262c9442a1 NetNS:/var/run/netns/757ff5c5-a521-4844-a049-e522c533513a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.539531721Z" level=info msg="Got pod network &{Name:snapshot-controller-7d9fbc56b8-xpkwb Namespace:kube-system ID:f6de9f678281f56eeedbf6ae6901b4ae6467e038063d202c016767b6202aac56 UID:2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e NetNS:/var/run/netns/bced5fc6-2c01-4d99-8e9b-a0dd0e6e7f5c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.539653349Z" level=info msg="Deleting pod kube-system_snapshot-controller-7d9fbc56b8-n65gp from CNI network \"kindnet\" (type=ptp)"
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.539712004Z" level=info msg="Deleting pod kube-system_snapshot-controller-7d9fbc56b8-xpkwb from CNI network \"kindnet\" (type=ptp)"
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.559556735Z" level=info msg="Stopped pod sandbox: 6d94b7786d2915f4214f448fb35adab2481a102bc5df4f3dd362d8176443cb29" id=1cfa9fec-b1d4-4cb7-aec1-849cdfc5d552 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:44:31 addons-051783 crio[938]: time="2025-09-29 08:44:31.567559977Z" level=info msg="Stopped pod sandbox: f6de9f678281f56eeedbf6ae6901b4ae6467e038063d202c016767b6202aac56" id=300e3690-d2b3-4630-9654-8f880843caf0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:44:32 addons-051783 crio[938]: time="2025-09-29 08:44:32.322960023Z" level=info msg="Removing container: a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf" id=7fc0c4f5-996a-4fc3-a5e8-355656a9e2ee name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 29 08:44:32 addons-051783 crio[938]: time="2025-09-29 08:44:32.342225971Z" level=info msg="Removed container a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf: kube-system/snapshot-controller-7d9fbc56b8-n65gp/volume-snapshot-controller" id=7fc0c4f5-996a-4fc3-a5e8-355656a9e2ee name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 29 08:44:32 addons-051783 crio[938]: time="2025-09-29 08:44:32.344455883Z" level=info msg="Removing container: 45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f" id=a1b6a80d-1bfd-49a8-a4fb-7a6d7630acc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 29 08:44:32 addons-051783 crio[938]: time="2025-09-29 08:44:32.363350130Z" level=info msg="Removed container 45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f: kube-system/snapshot-controller-7d9fbc56b8-xpkwb/volume-snapshot-controller" id=a1b6a80d-1bfd-49a8-a4fb-7a6d7630acc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 29 08:44:36 addons-051783 crio[938]: time="2025-09-29 08:44:36.958718999Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b793c43f-33de-43ba-ae84-bc8e88df96ba name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:44:36 addons-051783 crio[938]: time="2025-09-29 08:44:36.959061837Z" level=info msg="Image docker.io/nginx:alpine not found" id=b793c43f-33de-43ba-ae84-bc8e88df96ba name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:44:47 addons-051783 crio[938]: time="2025-09-29 08:44:47.958433544Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3b1cb534-9e4b-4318-bf8b-5087cf2d2c97 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:44:47 addons-051783 crio[938]: time="2025-09-29 08:44:47.958719703Z" level=info msg="Image docker.io/nginx:alpine not found" id=3b1cb534-9e4b-4318-bf8b-5087cf2d2c97 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:44:48 addons-051783 crio[938]: time="2025-09-29 08:44:48.518622250Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f0a411a2-503b-4fdb-8940-d9da9cbd7489 name=/runtime.v1.ImageService/PullImage
	Sep 29 08:44:48 addons-051783 crio[938]: time="2025-09-29 08:44:48.522884474Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Sep 29 08:44:58 addons-051783 crio[938]: time="2025-09-29 08:44:58.958949040Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fcd123a5-fd03-46f4-8d6b-88b1e7c02484 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:44:58 addons-051783 crio[938]: time="2025-09-29 08:44:58.959272640Z" level=info msg="Image docker.io/nginx:alpine not found" id=fcd123a5-fd03-46f4-8d6b-88b1e7c02484 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:45:01 addons-051783 crio[938]: time="2025-09-29 08:45:01.960534083Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=2d84af8e-bca2-41a6-a1a1-269168a40a68 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:45:01 addons-051783 crio[938]: time="2025-09-29 08:45:01.961137077Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=2d84af8e-bca2-41a6-a1a1-269168a40a68 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	15470dfdbc373       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	27b09cd861214       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	f91efb30edf5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          8 minutes ago       Running             busybox                                  0                   b37a2c191a161       busybox
	b891eff935e5b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            8 minutes ago       Running             liveness-probe                           0                   0a15333993f59       csi-hostpathplugin-59n9q
	1b49b8a0c49b0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           9 minutes ago       Running             hostpath                                 0                   0a15333993f59       csi-hostpathplugin-59n9q
	78cd30ad0ac78       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                10 minutes ago      Running             node-driver-registrar                    0                   0a15333993f59       csi-hostpathplugin-59n9q
	80836b6027c82       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             11 minutes ago      Running             controller                               0                   3f400eb1db037       ingress-nginx-controller-9cc49f96f-qxqnk
	fa2f9b0c2f698       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            11 minutes ago      Running             gadget                                   0                   2b559b62ddeb7       gadget-p475s
	958aa9722d317       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   11 minutes ago      Running             csi-external-health-monitor-controller   0                   0a15333993f59       csi-hostpathplugin-59n9q
	727b1119f42fa       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             12 minutes ago      Running             csi-attacher                             0                   942be1f7fe3d6       csi-hostpath-attacher-0
	7cd9c383cc30b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   12 minutes ago      Exited              patch                                    0                   748502b4be4ae       ingress-nginx-admission-patch-scvfj
	964faa56de026       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              12 minutes ago      Running             csi-resizer                              0                   e4387328f31ab       csi-hostpath-resizer-0
	739db184c3579       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             13 minutes ago      Running             local-path-provisioner                   0                   7bd7dc81e5ff1       local-path-provisioner-648f6765c9-mzt6q
	64ec0688b1d33       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   13 minutes ago      Exited              create                                   0                   544ece1299156       ingress-nginx-admission-create-rbxvf
	ec2908a8acb76       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             14 minutes ago      Running             coredns                                  0                   8e80666def432       coredns-66bc5c9577-n8bx8
	48e51a6b3842e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             14 minutes ago      Running             storage-provisioner                      0                   b3063249d1902       storage-provisioner
	e6e25b7f19aec       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             14 minutes ago      Running             kindnet-cni                              0                   ea7b34d68514f       kindnet-47v7m
	a04df67a3379a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             14 minutes ago      Running             kube-proxy                               0                   9dbf0742f683c       kube-proxy-wbl7p
	3d5bc8bd7f0ff       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             15 minutes ago      Running             etcd                                     0                   240e67822abd8       etcd-addons-051783
	2e4ff50d0ab7d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             15 minutes ago      Running             kube-apiserver                           0                   7d31b1c07e6fc       kube-apiserver-addons-051783
	6d75e80cafef2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             15 minutes ago      Running             kube-controller-manager                  0                   0e144a50e60a7       kube-controller-manager-addons-051783
	33ea9996cc1d3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             15 minutes ago      Running             kube-scheduler                           0                   eee48e5387175       kube-scheduler-addons-051783
	
	
	==> coredns [ec2908a8acb7634faddb0add70c1cdc6e4b2ec0e64082e83c00bcc1f5187825c] <==
	[INFO] 10.244.0.22:53146 - 52855 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135376s
	[INFO] 10.244.0.22:44463 - 13157 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003407125s
	[INFO] 10.244.0.22:42741 - 2598 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005880456s
	[INFO] 10.244.0.22:43358 - 65412 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005081069s
	[INFO] 10.244.0.22:56808 - 9814 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005221504s
	[INFO] 10.244.0.22:57222 - 14161 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005164648s
	[INFO] 10.244.0.22:51834 - 10942 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006548594s
	[INFO] 10.244.0.22:37769 - 48093 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004505471s
	[INFO] 10.244.0.22:41744 - 45710 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007413415s
	[INFO] 10.244.0.22:56260 - 25719 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002697955s
	[INFO] 10.244.0.22:35710 - 58420 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003322975s
	[INFO] 10.244.0.26:59060 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000230685s
	[INFO] 10.244.0.26:45421 - 3 "AAAA IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000136278s
	[INFO] 10.244.0.26:44591 - 4 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116365s
	[INFO] 10.244.0.26:57553 - 5 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117524s
	[INFO] 10.244.0.26:49960 - 6 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003803543s
	[INFO] 10.244.0.26:37529 - 7 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 204 0.004482599s
	[INFO] 10.244.0.26:51766 - 8 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.int. udp 75 false 512" NXDOMAIN qr,rd,ra 148 0.147452363s
	[INFO] 10.244.0.26:46339 - 9 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000143392s
	[INFO] 10.244.0.26:35817 - 10 "A IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000114781s
	[INFO] 10.244.0.26:57333 - 11 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128127s
	[INFO] 10.244.0.26:33589 - 12 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009747s
	[INFO] 10.244.0.26:38381 - 13 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003185786s
	[INFO] 10.244.0.26:42582 - 14 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 204 0.005148102s
	[INFO] 10.244.0.26:42532 - 15 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.int. udp 75 false 512" NXDOMAIN qr,rd,ra 148 0.130600393s
	
	
	==> describe nodes <==
	Name:               addons-051783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-051783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=addons-051783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-051783
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-051783"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 08:30:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-051783
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 08:44:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-051783
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 83273b57f406470abdf516e252de2f52
	  System UUID:                ec5529e1-1ad9-400f-8294-1adf6616ba82
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  gadget                      gadget-p475s                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-qxqnk                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-n8bx8                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-59n9q                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-addons-051783                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-47v7m                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-051783                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-051783                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-wbl7p                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-051783                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  local-path-storage          local-path-provisioner-648f6765c9-mzt6q                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-051783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-051783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-051783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node addons-051783 event: Registered Node addons-051783 in Controller
	  Normal  NodeReady                14m   kubelet          Node addons-051783 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[ +16.774979] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 21 41 37 dd f5 08 06
	[  +0.000328] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[  +6.075530] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 33 34 7b 85 cf 08 06
	[  +0.055887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[Sep29 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fb 19 b5 d0 db 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[  +6.806604] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[ +13.433681] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	[  +8.966707] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 f7 73 94 db cd 08 06
	[  +0.000344] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[Sep29 08:07] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 ad d0 02 25 47 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	
	
	==> etcd [3d5bc8bd7f0ffa9831231e2ccd173ca20be89d6dcc1ee1ad3b14f8dd9571bb86] <==
	{"level":"warn","ts":"2025-09-29T08:30:02.997494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.003681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.011615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.018242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.030088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.033604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.039960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.046371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.100824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.793114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.799945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.542994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.549599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.569139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.575527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:32:28.071330Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.763336ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071530Z","caller":"traceutil/trace.go:172","msg":"trace[30119979] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1117; }","duration":"161.980989ms","start":"2025-09-29T08:32:27.909530Z","end":"2025-09-29T08:32:28.071511Z","steps":["trace[30119979] 'range keys from in-memory index tree'  (duration: 161.701686ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T08:32:28.071329Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.131454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071650Z","caller":"traceutil/trace.go:172","msg":"trace[1183857226] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"120.458435ms","start":"2025-09-29T08:32:27.951174Z","end":"2025-09-29T08:32:28.071633Z","steps":["trace[1183857226] 'range keys from in-memory index tree'  (duration: 120.052644ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T08:33:12.239457Z","caller":"traceutil/trace.go:172","msg":"trace[155675200] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"129.084223ms","start":"2025-09-29T08:33:12.110348Z","end":"2025-09-29T08:33:12.239432Z","steps":["trace[155675200] 'process raft request'  (duration: 69.579624ms)","trace[155675200] 'compare'  (duration: 59.405727ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T08:33:12.474373Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.785446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:33:12.474452Z","caller":"traceutil/trace.go:172","msg":"trace[1612262900] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1258; }","duration":"129.87677ms","start":"2025-09-29T08:33:12.344560Z","end":"2025-09-29T08:33:12.474437Z","steps":["trace[1612262900] 'range keys from in-memory index tree'  (duration: 129.713966ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T08:40:02.621144Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1444}
	{"level":"info","ts":"2025-09-29T08:40:02.644347Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1444,"took":"22.608235ms","hash":1501025519,"current-db-size-bytes":6053888,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":3846144,"current-db-size-in-use":"3.8 MB"}
	{"level":"info","ts":"2025-09-29T08:40:02.644399Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1501025519,"revision":1444,"compact-revision":-1}
	
	
	==> kernel <==
	 08:45:02 up  2:27,  0 users,  load average: 0.15, 0.24, 0.57
	Linux addons-051783 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e6e25b7f19aec7f99b8219bbbaa88084f2510369dbfa360e267a083261d1c336] <==
	I0929 08:42:52.482978       1 main.go:301] handling current node
	I0929 08:43:02.482239       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:02.482281       1 main.go:301] handling current node
	I0929 08:43:12.475406       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:12.475464       1 main.go:301] handling current node
	I0929 08:43:22.478206       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:22.478244       1 main.go:301] handling current node
	I0929 08:43:32.480022       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:32.480059       1 main.go:301] handling current node
	I0929 08:43:42.476907       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:42.476958       1 main.go:301] handling current node
	I0929 08:43:52.477932       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:52.477972       1 main.go:301] handling current node
	I0929 08:44:02.478944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:02.478994       1 main.go:301] handling current node
	I0929 08:44:12.476129       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:12.476160       1 main.go:301] handling current node
	I0929 08:44:22.479291       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:22.479359       1 main.go:301] handling current node
	I0929 08:44:32.475567       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:32.475608       1 main.go:301] handling current node
	I0929 08:44:42.475940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:42.475983       1 main.go:301] handling current node
	I0929 08:44:52.478944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:52.478972       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e4ff50d0ab7df575a409e71f6c86b1e3bd4b8f41db0427eb9d65cbbef08b9a3] <==
	 > logger="UnhandledError"
	E0929 08:30:59.130912       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.135946       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.157237       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	I0929 08:30:59.225977       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 08:36:44.813354       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47410: use of closed network connection
	E0929 08:36:44.997114       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47438: use of closed network connection
	I0929 08:36:54.051263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.58.104"}
	I0929 08:37:00.154224       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 08:37:00.239132       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 08:37:00.408198       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.245.4"}
	I0929 08:40:03.495564       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 08:44:31.320478       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 08:44:31.320533       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 08:44:31.334332       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 08:44:31.334473       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 08:44:31.335600       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 08:44:31.335645       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 08:44:31.348945       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 08:44:31.349079       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 08:44:31.357633       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 08:44:31.357677       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 08:44:32.336441       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 08:44:32.358621       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 08:44:32.370970       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [6d75e80cafef289bcb0634728686530f7d177ec79248071405ed0223eda388c2] <==
	E0929 08:44:33.496901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:33.783384       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:33.784351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:35.179072       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:35.180004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:35.925976       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:35.927057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:36.445774       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:36.446808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:39.552268       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:39.553253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:40.437591       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:40.438547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0929 08:44:40.765860       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 08:44:40.765900       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:44:40.787113       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 08:44:40.787159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 08:44:40.877583       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:40.878669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:48.471960       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:48.472999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:49.788460       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:49.789482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:52.217443       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:52.218703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [a04df67a3379aa412e270c65b38675702f42ba0dc9e5c07b8052fb9a090d6471] <==
	I0929 08:30:12.128941       1 server_linux.go:53] "Using iptables proxy"
	I0929 08:30:12.417641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:30:12.520178       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:30:12.520269       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:30:12.522477       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:30:12.570590       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:30:12.570755       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:30:12.583981       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:30:12.584563       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:30:12.584628       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:30:12.586703       1 config.go:200] "Starting service config controller"
	I0929 08:30:12.586768       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:30:12.586873       1 config.go:309] "Starting node config controller"
	I0929 08:30:12.586913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:30:12.586938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 08:30:12.587504       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:30:12.587567       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:30:12.587568       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:30:12.587628       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:30:12.687916       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:30:12.688043       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:30:12.688062       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [33ea9996cc1d356857ab17f8e8157021f2b58227ecdb78065f0395986fc73f7b] <==
	E0929 08:30:03.522570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:03.522679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:03.522790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:03.522954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 08:30:03.522963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:03.522973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:03.523052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 08:30:03.523168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 08:30:03.523181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:03.523198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:03.523218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 08:30:03.523269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:30:03.523304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 08:30:03.523373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:03.523781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:30:04.391474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:04.430593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:04.474872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:04.497934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:04.640977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:04.655178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:04.765484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:04.784825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:04.965095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 08:30:06.819658       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 08:44:32 addons-051783 kubelet[1568]: I0929 08:44:32.342548    1568 scope.go:117] "RemoveContainer" containerID="a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf"
	Sep 29 08:44:32 addons-051783 kubelet[1568]: E0929 08:44:32.343148    1568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf\": container with ID starting with a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf not found: ID does not exist" containerID="a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf"
	Sep 29 08:44:32 addons-051783 kubelet[1568]: I0929 08:44:32.343189    1568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf"} err="failed to get container status \"a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf\": rpc error: code = NotFound desc = could not find container \"a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf\": container with ID starting with a07e229bf44a31c56de6004c7641c4ec4de9136493b72cc7feff8ec66fb163bf not found: ID does not exist"
	Sep 29 08:44:32 addons-051783 kubelet[1568]: I0929 08:44:32.343210    1568 scope.go:117] "RemoveContainer" containerID="45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f"
	Sep 29 08:44:32 addons-051783 kubelet[1568]: I0929 08:44:32.363684    1568 scope.go:117] "RemoveContainer" containerID="45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f"
	Sep 29 08:44:32 addons-051783 kubelet[1568]: E0929 08:44:32.364111    1568 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f\": container with ID starting with 45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f not found: ID does not exist" containerID="45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f"
	Sep 29 08:44:32 addons-051783 kubelet[1568]: I0929 08:44:32.364147    1568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f"} err="failed to get container status \"45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f\": rpc error: code = NotFound desc = could not find container \"45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f\": container with ID starting with 45863f8b96f321eb3838895848c350f0f7bd4e442e099f39411e04f42744283f not found: ID does not exist"
	Sep 29 08:44:33 addons-051783 kubelet[1568]: I0929 08:44:33.960069    1568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e" path="/var/lib/kubelet/pods/2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e/volumes"
	Sep 29 08:44:33 addons-051783 kubelet[1568]: I0929 08:44:33.960416    1568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8bddc78-350d-45c5-9361-48262c9442a1" path="/var/lib/kubelet/pods/d8bddc78-350d-45c5-9361-48262c9442a1/volumes"
	Sep 29 08:44:36 addons-051783 kubelet[1568]: E0929 08:44:36.138485    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135476138246359  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:36 addons-051783 kubelet[1568]: E0929 08:44:36.138525    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135476138246359  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:36 addons-051783 kubelet[1568]: E0929 08:44:36.959439    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b3f305e2-2997-431f-b6d3-7d97f0b357aa"
	Sep 29 08:44:40 addons-051783 kubelet[1568]: E0929 08:44:40.958719    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c75569f9-aafe-41b4-9ffa-4e10d9573809"
	Sep 29 08:44:46 addons-051783 kubelet[1568]: E0929 08:44:46.141007    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135486140748126  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:46 addons-051783 kubelet[1568]: E0929 08:44:46.141039    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135486140748126  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:47 addons-051783 kubelet[1568]: E0929 08:44:47.960213    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b3f305e2-2997-431f-b6d3-7d97f0b357aa"
	Sep 29 08:44:48 addons-051783 kubelet[1568]: E0929 08:44:48.518219    1568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89"
	Sep 29 08:44:48 addons-051783 kubelet[1568]: E0929 08:44:48.518281    1568 kuberuntime_image.go:43] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89"
	Sep 29 08:44:48 addons-051783 kubelet[1568]: E0929 08:44:48.518541    1568 kuberuntime_manager.go:1449] "Unhandled Error" err="container minikube-ingress-dns start failed in pod kube-ingress-dns-minikube_kube-system(ec159452-503b-4642-b822-ea6cdac8e16e): ErrImagePull: loading manifest for target platform: reading manifest sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:44:48 addons-051783 kubelet[1568]: E0929 08:44:48.518620    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	Sep 29 08:44:54 addons-051783 kubelet[1568]: E0929 08:44:54.958040    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c75569f9-aafe-41b4-9ffa-4e10d9573809"
	Sep 29 08:44:56 addons-051783 kubelet[1568]: E0929 08:44:56.143320    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135496143040572  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:56 addons-051783 kubelet[1568]: E0929 08:44:56.143354    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135496143040572  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:58 addons-051783 kubelet[1568]: E0929 08:44:58.959613    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b3f305e2-2997-431f-b6d3-7d97f0b357aa"
	Sep 29 08:45:01 addons-051783 kubelet[1568]: E0929 08:45:01.961513    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	
	
	==> storage-provisioner [48e51a6b3842e2e63335e82d65f22a4db94233392a881d6d3ff86158809cd5ed] <==
	W0929 08:44:36.813008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:38.815861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:38.819914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:40.823176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:40.827095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:42.830221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:42.834933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:44.837784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:44.842006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:46.845159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:46.849250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:48.852560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:48.856429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:50.859478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:50.864950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:52.870230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:52.874257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:54.877634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:54.881787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:56.884931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:56.888906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:58.892251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:58.896315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:45:00.899116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:45:00.903526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051783 -n addons-051783
helpers_test.go:269: (dbg) Run:  kubectl --context addons-051783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3: exit status 1 (86.09686ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:37:00 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrnn8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wrnn8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m3s                  default-scheduler  Successfully assigned default/nginx to addons-051783
	  Warning  Failed     6m48s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m56s (x4 over 8m3s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     79s (x4 over 6m48s)   kubelet            Error: ErrImagePull
	  Warning  Failed     79s (x3 over 5m45s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5s (x10 over 6m48s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5s (x10 over 6m48s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:38:27 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z2l94 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-z2l94:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m36s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-051783
	  Normal   Pulling    2m38s (x3 over 6m35s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     48s (x3 over 5m14s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     48s (x3 over 5m14s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x5 over 5m14s)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     9s (x5 over 5m14s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zdgkp (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-zdgkp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rbxvf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-scvfj" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 addons disable ingress-dns --alsologtostderr -v=1: (1.133544792s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 addons disable ingress --alsologtostderr -v=1: (7.692424545s)
--- FAIL: TestAddons/parallel/Ingress (492.16s)
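Note: the ImagePullBackOff/ErrImagePull events above show the nginx pulls failing against docker.io's unauthenticated rate limit (toomanyrequests), so the ingress backend pod never became Ready; the failure is an image-pull problem rather than an ingress-controller problem. One possible mitigation, assuming the host itself can still pull the image (commands illustrative, not part of this run):

	docker pull docker.io/nginx:alpine
	minikube -p addons-051783 image load docker.io/nginx:alpine

Alternatively, a registry mirror (for example via minikube start --registry-mirror=<mirror-url>, where supported by the chosen container runtime) can sidestep the anonymous docker.io limit.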

                                                
                                    
TestAddons/parallel/CSI (384.09s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 08:38:07.649744  386225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 08:38:07.653046  386225 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 08:38:07.653070  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:08.153774  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:08.653693  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:09.153036  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:09.653867  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:10.153749  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:10.653760  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:11.153357  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:11.653805  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:12.153610  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:12.653884  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:13.153365  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:13.652854  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:14.153604  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:14.653251  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:15.153904  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:15.653811  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:16.153640  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:16.653758  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:17.153533  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:17.654250  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:18.153752  386225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0929 08:38:18.653088  386225 kapi.go:107] duration metric: took 11.003357697s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 11.003395328s
addons_test.go:552: (dbg) Run:  kubectl --context addons-051783 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc hpvc -o jsonpath={.status.phase} -n default
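Note: the repeated jsonpath polls above wait for the hpvc claim to reach the Bound phase; the same wait can be expressed in a single command (illustrative sketch, assuming kubectl 1.23+ which supports --for=jsonpath):

	kubectl --context addons-051783 -n default wait pvc/hpvc \
	  --for=jsonpath='{.status.phase}'=Bound --timeout=6m0s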
addons_test.go:562: (dbg) Run:  kubectl --context addons-051783 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c75569f9-aafe-41b4-9ffa-4e10d9573809] Pending
helpers_test.go:352: "task-pv-pod" [c75569f9-aafe-41b4-9ffa-4e10d9573809] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051783 -n addons-051783
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-09-29 08:44:28.258953504 +0000 UTC m=+915.905578952
addons_test.go:567: (dbg) Run:  kubectl --context addons-051783 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-051783 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-051783/192.168.49.2
Start Time:       Mon, 29 Sep 2025 08:38:27 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
IP:  10.244.0.27
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z2l94 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-z2l94:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m1s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-051783
Normal   BackOff    2m18s (x2 over 4m39s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     2m18s (x2 over 4m39s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    2m3s (x3 over 6m)      kubelet            Pulling image "docker.io/nginx"
Warning  Failed     13s (x3 over 4m39s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     13s (x3 over 4m39s)    kubelet            Error: ErrImagePull
addons_test.go:567: (dbg) Run:  kubectl --context addons-051783 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-051783 logs task-pv-pod -n default: exit status 1 (64.733802ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-051783 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
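Note: the pod spent the entire 6m0s window in ImagePullBackOff retrying docker.io pulls (the same toomanyrequests rate limit seen in the Ingress test). The waiting reason can be confirmed without the full describe output (illustrative command, not run here):

	kubectl --context addons-051783 -n default get pod task-pv-pod \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}{.status.containerStatuses[0].state.waiting.message}{"\n"}'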
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-051783
helpers_test.go:243: (dbg) docker inspect addons-051783:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	        "Created": "2025-09-29T08:29:49.784096917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 388185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T08:29:49.817498779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hosts",
	        "LogPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24-json.log",
	        "Name": "/addons-051783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-051783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-051783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	                "LowerDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-051783",
	                "Source": "/var/lib/docker/volumes/addons-051783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-051783",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-051783",
	                "name.minikube.sigs.k8s.io": "addons-051783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "047419f5f1ab31c122f731e4981df640cdefbc71a38b2a98a0269c254b8b5147",
	            "SandboxKey": "/var/run/docker/netns/047419f5f1ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-051783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:6e:72:c6:39:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0a6b532c24ef61399a92b99bcc9c2c11ccb6f875b789fadd5474d59e3dfaa8b",
	                    "EndpointID": "1838c1e0213d9bfb41a2e140fea05dd9b5a4866fea7930ce517a2c020e4c5b9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-051783",
	                        "d5025459b831"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
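Note: when only a few fields of the inspect output are of interest during a post-mortem, a Go-template format string keeps things shorter (illustrative, equivalent to reading the JSON above):

	docker inspect -f '{{.State.Status}}' addons-051783
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-051783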
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-051783 -n addons-051783
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 logs -n 25: (1.354288828s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-749576                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-575596                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-575596   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-749576                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-084266 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-084266                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-867285 --alsologtostderr --binary-mirror http://127.0.0.1:34813 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-867285                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ start   │ -p addons-051783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ enable headlamp -p addons-051783 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                           │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ ip      │ addons-051783 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	│ addons  │ addons-051783 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	│ addons  │ addons-051783 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:39 UTC │ 29 Sep 25 08:41 UTC │
	│ addons  │ addons-051783 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:41 UTC │ 29 Sep 25 08:41 UTC │
	│ addons  │ addons-051783 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:43 UTC │ 29 Sep 25 08:43 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:29:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:29:26.048391  387539 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:29:26.048698  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048709  387539 out.go:374] Setting ErrFile to fd 2...
	I0929 08:29:26.048715  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048947  387539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:29:26.049570  387539 out.go:368] Setting JSON to false
	I0929 08:29:26.050522  387539 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7915,"bootTime":1759126651,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:29:26.050623  387539 start.go:140] virtualization: kvm guest
	I0929 08:29:26.052691  387539 out.go:179] * [addons-051783] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:29:26.053951  387539 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:29:26.053949  387539 notify.go:220] Checking for updates...
	I0929 08:29:26.056443  387539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:29:26.057666  387539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:29:26.058965  387539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:29:26.060266  387539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:29:26.061458  387539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:29:26.062925  387539 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:29:26.085693  387539 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:29:26.085842  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.138374  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.129030053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.138489  387539 docker.go:318] overlay module found
	I0929 08:29:26.140424  387539 out.go:179] * Using the docker driver based on user configuration
	I0929 08:29:26.141686  387539 start.go:304] selected driver: docker
	I0929 08:29:26.141705  387539 start.go:924] validating driver "docker" against <nil>
	I0929 08:29:26.141717  387539 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:29:26.142365  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.198070  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.188331621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.198307  387539 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 08:29:26.198590  387539 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:29:26.200386  387539 out.go:179] * Using Docker driver with root privileges
	I0929 08:29:26.201498  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:26.201578  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:26.201592  387539 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 08:29:26.201692  387539 start.go:348] cluster config:
	{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

                                                
                                                
	I0929 08:29:26.202985  387539 out.go:179] * Starting "addons-051783" primary control-plane node in "addons-051783" cluster
	I0929 08:29:26.204068  387539 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 08:29:26.205294  387539 out.go:179] * Pulling base image v0.0.48 ...
	I0929 08:29:26.206376  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:26.206412  387539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 08:29:26.206422  387539 cache.go:58] Caching tarball of preloaded images
	I0929 08:29:26.206482  387539 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 08:29:26.206520  387539 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 08:29:26.206532  387539 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 08:29:26.206899  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:26.206927  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json: {Name:mk2a286bc12b96a7a99203a2062747f0cef91a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:26.223250  387539 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 08:29:26.223398  387539 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 08:29:26.223419  387539 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 08:29:26.223423  387539 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 08:29:26.223433  387539 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 08:29:26.223443  387539 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 08:29:38.381567  387539 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 08:29:38.381612  387539 cache.go:232] Successfully downloaded all kic artifacts
	I0929 08:29:38.381692  387539 start.go:360] acquireMachinesLock for addons-051783: {Name:mk2e012788fca6778bd19d14926129f41648dfda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 08:29:38.381939  387539 start.go:364] duration metric: took 219.203µs to acquireMachinesLock for "addons-051783"
	I0929 08:29:38.381976  387539 start.go:93] Provisioning new machine with config: &{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:29:38.382063  387539 start.go:125] createHost starting for "" (driver="docker")
	I0929 08:29:38.383873  387539 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 08:29:38.384110  387539 start.go:159] libmachine.API.Create for "addons-051783" (driver="docker")
	I0929 08:29:38.384143  387539 client.go:168] LocalClient.Create starting
	I0929 08:29:38.384255  387539 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem
	I0929 08:29:38.717409  387539 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem
	I0929 08:29:39.058441  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 08:29:39.075697  387539 cli_runner.go:211] docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 08:29:39.075776  387539 network_create.go:284] running [docker network inspect addons-051783] to gather additional debugging logs...
	I0929 08:29:39.075797  387539 cli_runner.go:164] Run: docker network inspect addons-051783
	W0929 08:29:39.093367  387539 cli_runner.go:211] docker network inspect addons-051783 returned with exit code 1
	I0929 08:29:39.093407  387539 network_create.go:287] error running [docker network inspect addons-051783]: docker network inspect addons-051783: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-051783 not found
	I0929 08:29:39.093422  387539 network_create.go:289] output of [docker network inspect addons-051783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-051783 not found
	
	** /stderr **
	I0929 08:29:39.093524  387539 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:39.112614  387539 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c10860}
	I0929 08:29:39.112659  387539 network_create.go:124] attempt to create docker network addons-051783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 08:29:39.112709  387539 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-051783 addons-051783
	I0929 08:29:39.172396  387539 network_create.go:108] docker network addons-051783 192.168.49.0/24 created
	I0929 08:29:39.172433  387539 kic.go:121] calculated static IP "192.168.49.2" for the "addons-051783" container
	I0929 08:29:39.172502  387539 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 08:29:39.190245  387539 cli_runner.go:164] Run: docker volume create addons-051783 --label name.minikube.sigs.k8s.io=addons-051783 --label created_by.minikube.sigs.k8s.io=true
	I0929 08:29:39.209341  387539 oci.go:103] Successfully created a docker volume addons-051783
	I0929 08:29:39.209430  387539 cli_runner.go:164] Run: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 08:29:45.546598  387539 cli_runner.go:217] Completed: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.337124509s)
	I0929 08:29:45.546633  387539 oci.go:107] Successfully prepared a docker volume addons-051783
	I0929 08:29:45.546654  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:45.546683  387539 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 08:29:45.546737  387539 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 08:29:49.714226  387539 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.167437965s)
	I0929 08:29:49.714268  387539 kic.go:203] duration metric: took 4.167582619s to extract preloaded images to volume ...
	W0929 08:29:49.714368  387539 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 08:29:49.714404  387539 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 08:29:49.714455  387539 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 08:29:49.767111  387539 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-051783 --name addons-051783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-051783 --network addons-051783 --ip 192.168.49.2 --volume addons-051783:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 08:29:50.031579  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Running}}
	I0929 08:29:50.049810  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.068448  387539 cli_runner.go:164] Run: docker exec addons-051783 stat /var/lib/dpkg/alternatives/iptables
	I0929 08:29:50.119527  387539 oci.go:144] the created container "addons-051783" has a running status.
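
For context on the cli_runner probes above: the container status checks are plain "docker container inspect" invocations with a Go template. A minimal standalone sketch of that probe (illustrative only, not minikube's cli_runner; the container name is the one created in this run) might look like:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerRunning mirrors: docker container inspect <name> --format={{.State.Running}}
func containerRunning(name string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Running}}").Output()
	if err != nil {
		return false, fmt.Errorf("docker inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	running, err := containerRunning("addons-051783")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("running:", running)
}
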
	I0929 08:29:50.119561  387539 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa...
	I0929 08:29:50.320586  387539 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 08:29:50.349341  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.370499  387539 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 08:29:50.370528  387539 kic_runner.go:114] Args: [docker exec --privileged addons-051783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 08:29:50.419544  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.438350  387539 machine.go:93] provisionDockerMachine start ...
	I0929 08:29:50.438444  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.459048  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.459374  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.459393  387539 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 08:29:50.596058  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.596100  387539 ubuntu.go:182] provisioning hostname "addons-051783"
	I0929 08:29:50.596175  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.615278  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.615589  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.615612  387539 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-051783 && echo "addons-051783" | sudo tee /etc/hostname
	I0929 08:29:50.766108  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.766195  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.785560  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.785774  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.785791  387539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-051783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-051783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-051783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 08:29:50.924619  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 08:29:50.924652  387539 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 08:29:50.924674  387539 ubuntu.go:190] setting up certificates
	I0929 08:29:50.924687  387539 provision.go:84] configureAuth start
	I0929 08:29:50.924737  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:50.943329  387539 provision.go:143] copyHostCerts
	I0929 08:29:50.943421  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 08:29:50.943556  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 08:29:50.943643  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 08:29:50.943713  387539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.addons-051783 san=[127.0.0.1 192.168.49.2 addons-051783 localhost minikube]
	I0929 08:29:51.148195  387539 provision.go:177] copyRemoteCerts
	I0929 08:29:51.148260  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 08:29:51.148304  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.166345  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.264074  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 08:29:51.290856  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 08:29:51.316758  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 08:29:51.341889  387539 provision.go:87] duration metric: took 417.187234ms to configureAuth
	I0929 08:29:51.341922  387539 ubuntu.go:206] setting minikube options for container-runtime
	I0929 08:29:51.342090  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:29:51.342194  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.359952  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:51.360170  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:51.360189  387539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 08:29:51.599614  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 08:29:51.599641  387539 machine.go:96] duration metric: took 1.161262882s to provisionDockerMachine
	I0929 08:29:51.599653  387539 client.go:171] duration metric: took 13.215501429s to LocalClient.Create
	I0929 08:29:51.599668  387539 start.go:167] duration metric: took 13.215557799s to libmachine.API.Create "addons-051783"
	I0929 08:29:51.599677  387539 start.go:293] postStartSetup for "addons-051783" (driver="docker")
	I0929 08:29:51.599688  387539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 08:29:51.599774  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 08:29:51.599856  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.618351  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.717587  387539 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 08:29:51.721317  387539 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 08:29:51.721352  387539 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 08:29:51.721363  387539 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 08:29:51.721372  387539 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 08:29:51.721390  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 08:29:51.721462  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 08:29:51.721495  387539 start.go:296] duration metric: took 121.8109ms for postStartSetup
	I0929 08:29:51.721801  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.739650  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:51.740046  387539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 08:29:51.740104  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.758050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.851192  387539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 08:29:51.855723  387539 start.go:128] duration metric: took 13.4736408s to createHost
	I0929 08:29:51.855753  387539 start.go:83] releasing machines lock for "addons-051783", held for 13.47379323s
	I0929 08:29:51.855844  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.873999  387539 ssh_runner.go:195] Run: cat /version.json
	I0929 08:29:51.874046  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.874101  387539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 08:29:51.874186  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.892677  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.892826  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.984022  387539 ssh_runner.go:195] Run: systemctl --version
	I0929 08:29:52.057018  387539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 08:29:52.197504  387539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 08:29:52.202664  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.226004  387539 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 08:29:52.226089  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.256267  387539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 08:29:52.256294  387539 start.go:495] detecting cgroup driver to use...
	I0929 08:29:52.256336  387539 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 08:29:52.256387  387539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 08:29:52.272062  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 08:29:52.284075  387539 docker.go:218] disabling cri-docker service (if available) ...
	I0929 08:29:52.284139  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 08:29:52.297608  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 08:29:52.311496  387539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 08:29:52.379434  387539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 08:29:52.452878  387539 docker.go:234] disabling docker service ...
	I0929 08:29:52.452951  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 08:29:52.471190  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 08:29:52.482728  387539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 08:29:52.553081  387539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 08:29:52.660824  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 08:29:52.672658  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 08:29:52.689950  387539 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21650-382648/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I0929 08:29:53.606681  387539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 08:29:53.606744  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.620746  387539 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 08:29:53.620827  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.632032  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.642692  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.653396  387539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 08:29:53.663250  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.673800  387539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.690677  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.701296  387539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 08:29:53.710748  387539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 08:29:53.720068  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:53.822567  387539 ssh_runner.go:195] Run: sudo systemctl restart crio
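
The pause_image and cgroup_manager edits above are straight sed rewrites of /etc/crio/crio.conf.d/02-crio.conf, followed by a crio restart. A hedged Go equivalent of the pause_image step (a sketch only, using the path and image shown in this log and assuming it runs as root on the node):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line, matching the same lines as:
// sed -i 's|^.*pause_image = .*$|pause_image = "<image>"|' <confPath>
func setPauseImage(confPath, image string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(confPath, updated, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
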
	I0929 08:29:54.052148  387539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 08:29:54.052242  387539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 08:29:54.056279  387539 start.go:563] Will wait 60s for crictl version
	I0929 08:29:54.056335  387539 ssh_runner.go:195] Run: which crictl
	I0929 08:29:54.059686  387539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 08:29:54.093633  387539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 08:29:54.093726  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.130572  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.167704  387539 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	I0929 08:29:54.169060  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:54.186559  387539 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 08:29:54.190730  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 08:29:54.202692  387539 kubeadm.go:875] updating cluster {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 08:29:54.202909  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.337502  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.468366  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.649435  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:54.649610  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.777589  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.915339  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:55.048055  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.117941  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.117965  387539 crio.go:433] Images already preloaded, skipping extraction
	I0929 08:29:55.118025  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.154367  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.154391  387539 cache_images.go:85] Images are preloaded, skipping loading
	I0929 08:29:55.154401  387539 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I0929 08:29:55.154505  387539 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-051783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 08:29:55.154591  387539 ssh_runner.go:195] Run: crio config
	I0929 08:29:55.197157  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:55.197179  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:55.197193  387539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 08:29:55.197222  387539 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-051783 NodeName:addons-051783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 08:29:55.197413  387539 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-051783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 08:29:55.197493  387539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 08:29:55.207525  387539 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 08:29:55.207613  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 08:29:55.217221  387539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 08:29:55.235810  387539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 08:29:55.258594  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0929 08:29:55.277991  387539 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 08:29:55.281790  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
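
The /etc/hosts updates in this log (host.minikube.internal earlier, control-plane.minikube.internal here) are grep-and-append shell one-liners. A rough Go version of the same idempotent edit, using the IP and host name from this run (sketch only; writing /etc/hosts requires root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" (the grep -v step)
// and appends "<ip>\t<name>", mirroring the bash one-liner in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
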
	I0929 08:29:55.293204  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:55.360353  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:29:55.382375  387539 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783 for IP: 192.168.49.2
	I0929 08:29:55.382400  387539 certs.go:194] generating shared ca certs ...
	I0929 08:29:55.382416  387539 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.382548  387539 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 08:29:55.651560  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt ...
	I0929 08:29:55.651593  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt: {Name:mk53fbf30de594b3575593db0eac7c74aa2a569b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651775  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key ...
	I0929 08:29:55.651787  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key: {Name:mk35c377f1d90bf347db7dc4624ea5b41f2dcae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651874  387539 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 08:29:56.010531  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt ...
	I0929 08:29:56.010572  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt: {Name:mkabe28787fe5521225369fcdd8a8684c242d367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010810  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key ...
	I0929 08:29:56.010828  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key: {Name:mk151240dae8e83bb981e456caae01db62eb2077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010954  387539 certs.go:256] generating profile certs ...
	I0929 08:29:56.011050  387539 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key
	I0929 08:29:56.011071  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt with IP's: []
	I0929 08:29:56.156766  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt ...
	I0929 08:29:56.156798  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: {Name:mk9b8f8dd7c08d896eb2f2a24df27c4df7b8a87a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157020  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key ...
	I0929 08:29:56.157045  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key: {Name:mk413d2883ee03859619bae9a6ad426c2dac294b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157158  387539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d
	I0929 08:29:56.157188  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 08:29:56.672467  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d ...
	I0929 08:29:56.672506  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d: {Name:mka498a3f60495ba4009bb038cca767d64e6d878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672723  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d ...
	I0929 08:29:56.672747  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d: {Name:mkd42036f907b80afa6962c66b97c00a14ed475b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672879  387539 certs.go:381] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt
	I0929 08:29:56.672993  387539 certs.go:385] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key
	I0929 08:29:56.673074  387539 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key
	I0929 08:29:56.673103  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt with IP's: []
	I0929 08:29:57.054367  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt ...
	I0929 08:29:57.054403  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt: {Name:mk108739363f385844a88df9ec106753ae771d0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054593  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key ...
	I0929 08:29:57.054605  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key: {Name:mk26b223288f2fd31a6e78b544277cdc3d5192ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
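
The profile certificates above come from minikube's certs.go/crypto.go: a minikubeCA plus an apiserver serving cert signed with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]. A standard-library sketch of that kind of CA-plus-serving-cert generation (illustrative only, error handling elided for brevity, values taken from this log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, roughly analogous to the "minikubeCA" cert generated above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate signed by the CA, with the apiserver IP SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit both certs as PEM; real code would also persist the private keys.
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
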
	I0929 08:29:57.054865  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 08:29:57.054909  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 08:29:57.054936  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 08:29:57.054959  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 08:29:57.055530  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 08:29:57.081419  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 08:29:57.107158  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 08:29:57.132325  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 08:29:57.157699  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 08:29:57.182851  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 08:29:57.207862  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 08:29:57.233471  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 08:29:57.258657  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 08:29:57.286501  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 08:29:57.305136  387539 ssh_runner.go:195] Run: openssl version
	I0929 08:29:57.310898  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 08:29:57.323725  387539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327458  387539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327527  387539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.334303  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 08:29:57.344385  387539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 08:29:57.347990  387539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 08:29:57.348046  387539 kubeadm.go:392] StartCluster: {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSD
omain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnet
Path: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:29:57.348116  387539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 08:29:57.348159  387539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 08:29:57.385638  387539 cri.go:89] found id: ""
	I0929 08:29:57.385716  387539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 08:29:57.395454  387539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 08:29:57.405038  387539 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 08:29:57.405100  387539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 08:29:57.414685  387539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 08:29:57.414705  387539 kubeadm.go:157] found existing configuration files:
	
	I0929 08:29:57.414765  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 08:29:57.424091  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 08:29:57.424158  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 08:29:57.433341  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 08:29:57.442616  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 08:29:57.442679  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 08:29:57.451665  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.460943  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 08:29:57.461008  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.470122  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 08:29:57.479257  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 08:29:57.479340  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 08:29:57.488496  387539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 08:29:57.543664  387539 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 08:29:57.607707  387539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 08:30:06.732943  387539 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I0929 08:30:06.732999  387539 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 08:30:06.733103  387539 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 08:30:06.733192  387539 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 08:30:06.733241  387539 kubeadm.go:310] OS: Linux
	I0929 08:30:06.733332  387539 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 08:30:06.733405  387539 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 08:30:06.733457  387539 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 08:30:06.733497  387539 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 08:30:06.733545  387539 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 08:30:06.733624  387539 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 08:30:06.733688  387539 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 08:30:06.733751  387539 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 08:30:06.733912  387539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 08:30:06.734049  387539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 08:30:06.734125  387539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 08:30:06.734176  387539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 08:30:06.736008  387539 out.go:252]   - Generating certificates and keys ...
	I0929 08:30:06.736074  387539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 08:30:06.736130  387539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 08:30:06.736184  387539 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 08:30:06.736237  387539 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 08:30:06.736289  387539 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 08:30:06.736356  387539 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 08:30:06.736446  387539 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 08:30:06.736584  387539 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736671  387539 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 08:30:06.736803  387539 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736949  387539 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 08:30:06.737047  387539 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 08:30:06.737115  387539 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 08:30:06.737192  387539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 08:30:06.737274  387539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 08:30:06.737358  387539 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 08:30:06.737431  387539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 08:30:06.737517  387539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 08:30:06.737617  387539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 08:30:06.737730  387539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 08:30:06.737805  387539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 08:30:06.739945  387539 out.go:252]   - Booting up control plane ...
	I0929 08:30:06.740037  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 08:30:06.740106  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 08:30:06.740177  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 08:30:06.740270  387539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 08:30:06.740362  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 08:30:06.740460  387539 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 08:30:06.740572  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 08:30:06.740634  387539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 08:30:06.740771  387539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 08:30:06.740901  387539 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 08:30:06.740969  387539 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.961891ms
	I0929 08:30:06.741050  387539 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 08:30:06.741148  387539 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 08:30:06.741256  387539 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 08:30:06.741361  387539 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 08:30:06.741468  387539 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.198584202s
	I0929 08:30:06.741557  387539 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.20667671s
	I0929 08:30:06.741647  387539 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002286434s
	I0929 08:30:06.741774  387539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 08:30:06.741941  387539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 08:30:06.741998  387539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 08:30:06.742173  387539 kubeadm.go:310] [mark-control-plane] Marking the node addons-051783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 08:30:06.742236  387539 kubeadm.go:310] [bootstrap-token] Using token: sez7z1.jh96okhowb57z8tt
	I0929 08:30:06.743877  387539 out.go:252]   - Configuring RBAC rules ...
	I0929 08:30:06.743987  387539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 08:30:06.744079  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 08:30:06.744207  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 08:30:06.744316  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 08:30:06.744423  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 08:30:06.744505  387539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 08:30:06.744607  387539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 08:30:06.744646  387539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 08:30:06.744689  387539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 08:30:06.744695  387539 kubeadm.go:310] 
	I0929 08:30:06.744746  387539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 08:30:06.744752  387539 kubeadm.go:310] 
	I0929 08:30:06.744820  387539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 08:30:06.744826  387539 kubeadm.go:310] 
	I0929 08:30:06.744869  387539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 08:30:06.744924  387539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 08:30:06.744972  387539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 08:30:06.744978  387539 kubeadm.go:310] 
	I0929 08:30:06.745052  387539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 08:30:06.745066  387539 kubeadm.go:310] 
	I0929 08:30:06.745135  387539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 08:30:06.745149  387539 kubeadm.go:310] 
	I0929 08:30:06.745232  387539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 08:30:06.745306  387539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 08:30:06.745369  387539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 08:30:06.745377  387539 kubeadm.go:310] 
	I0929 08:30:06.745445  387539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 08:30:06.745514  387539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 08:30:06.745520  387539 kubeadm.go:310] 
	I0929 08:30:06.745584  387539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745665  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 \
	I0929 08:30:06.745690  387539 kubeadm.go:310] 	--control-plane 
	I0929 08:30:06.745699  387539 kubeadm.go:310] 
	I0929 08:30:06.745764  387539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 08:30:06.745774  387539 kubeadm.go:310] 
	I0929 08:30:06.745853  387539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745968  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 
	I0929 08:30:06.745984  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:30:06.745992  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:30:06.748010  387539 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 08:30:06.749332  387539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 08:30:06.753814  387539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I0929 08:30:06.753848  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 08:30:06.772879  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 08:30:06.985959  387539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 08:30:06.986041  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:06.986104  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-051783 minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78 minikube.k8s.io/name=addons-051783 minikube.k8s.io/primary=true
	I0929 08:30:06.996442  387539 ops.go:34] apiserver oom_adj: -16
	I0929 08:30:07.062951  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:07.563693  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.063933  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.563857  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.063020  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.563145  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.063764  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.564058  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.063584  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.131479  387539 kubeadm.go:1105] duration metric: took 4.145485124s to wait for elevateKubeSystemPrivileges
	I0929 08:30:11.131516  387539 kubeadm.go:394] duration metric: took 13.783475405s to StartCluster
	I0929 08:30:11.131536  387539 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.131680  387539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:30:11.132107  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.132380  387539 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:30:11.132425  387539 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 08:30:11.132561  387539 addons.go:69] Setting yakd=true in profile "addons-051783"
	I0929 08:30:11.132586  387539 addons.go:238] Setting addon yakd=true in "addons-051783"
	I0929 08:30:11.132592  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.132625  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132389  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 08:30:11.132650  387539 addons.go:69] Setting default-storageclass=true in profile "addons-051783"
	I0929 08:30:11.132650  387539 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-051783"
	I0929 08:30:11.132651  387539 addons.go:69] Setting registry-creds=true in profile "addons-051783"
	I0929 08:30:11.132672  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-051783"
	I0929 08:30:11.132675  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-051783"
	I0929 08:30:11.132684  387539 addons.go:238] Setting addon registry-creds=true in "addons-051783"
	I0929 08:30:11.132675  387539 addons.go:69] Setting storage-provisioner=true in profile "addons-051783"
	I0929 08:30:11.132723  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132729  387539 addons.go:69] Setting gcp-auth=true in profile "addons-051783"
	I0929 08:30:11.132737  387539 addons.go:69] Setting ingress=true in profile "addons-051783"
	I0929 08:30:11.132749  387539 addons.go:238] Setting addon ingress=true in "addons-051783"
	I0929 08:30:11.132751  387539 mustload.go:65] Loading cluster: addons-051783
	I0929 08:30:11.132786  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132903  387539 addons.go:69] Setting ingress-dns=true in profile "addons-051783"
	I0929 08:30:11.132921  387539 addons.go:238] Setting addon ingress-dns=true in "addons-051783"
	I0929 08:30:11.132932  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.133022  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133038  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133039  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133154  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133198  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133236  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133242  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133465  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.134910  387539 addons.go:69] Setting metrics-server=true in profile "addons-051783"
	I0929 08:30:11.134935  387539 addons.go:238] Setting addon metrics-server=true in "addons-051783"
	I0929 08:30:11.134966  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.135401  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133500  387539 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.136449  387539 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-051783"
	I0929 08:30:11.136484  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.136993  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.137446  387539 addons.go:69] Setting registry=true in profile "addons-051783"
	I0929 08:30:11.137472  387539 addons.go:238] Setting addon registry=true in "addons-051783"
	I0929 08:30:11.137504  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.137785  387539 out.go:179] * Verifying Kubernetes components...
	I0929 08:30:11.132620  387539 addons.go:69] Setting inspektor-gadget=true in profile "addons-051783"
	I0929 08:30:11.137998  387539 addons.go:238] Setting addon inspektor-gadget=true in "addons-051783"
	I0929 08:30:11.138030  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.138040  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.138478  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.132724  387539 addons.go:238] Setting addon storage-provisioner=true in "addons-051783"
	I0929 08:30:11.138872  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133573  387539 addons.go:69] Setting volcano=true in profile "addons-051783"
	I0929 08:30:11.133608  387539 addons.go:69] Setting volumesnapshots=true in profile "addons-051783"
	I0929 08:30:11.133632  387539 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.133523  387539 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-051783"
	I0929 08:30:11.133512  387539 addons.go:69] Setting cloud-spanner=true in profile "addons-051783"
	I0929 08:30:11.139071  387539 addons.go:238] Setting addon cloud-spanner=true in "addons-051783"
	I0929 08:30:11.139164  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139273  387539 addons.go:238] Setting addon volumesnapshots=true in "addons-051783"
	I0929 08:30:11.139284  387539 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-051783"
	I0929 08:30:11.139311  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139319  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140056  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:30:11.140193  387539 addons.go:238] Setting addon volcano=true in "addons-051783"
	I0929 08:30:11.140204  387539 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:11.140225  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140228  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.146698  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147224  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147394  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149077  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149662  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.151164  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.176264  387539 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 08:30:11.181229  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 08:30:11.181264  387539 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 08:30:11.181355  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.198928  387539 addons.go:238] Setting addon default-storageclass=true in "addons-051783"
	I0929 08:30:11.198980  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.200501  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.202621  387539 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 08:30:11.202751  387539 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 08:30:11.204060  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 08:30:11.204203  387539 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 08:30:11.204287  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.204590  387539 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 08:30:11.206350  387539 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 08:30:11.206413  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 08:30:11.206494  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	W0929 08:30:11.215084  387539 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 08:30:11.220539  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.228994  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 08:30:11.229058  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.230311  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 08:30:11.230348  387539 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 08:30:11.230415  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.230456  387539 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 08:30:11.232483  387539 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-051783"
	I0929 08:30:11.232653  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.234514  387539 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.234537  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 08:30:11.234593  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.236276  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.238980  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 08:30:11.240948  387539 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 08:30:11.242224  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.242345  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.242360  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 08:30:11.242423  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.249763  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.249815  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 08:30:11.249988  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.251632  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 08:30:11.252713  387539 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 08:30:11.256731  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 08:30:11.256909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.256925  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 08:30:11.257007  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.259232  387539 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 08:30:11.259246  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 08:30:11.261351  387539 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.261383  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 08:30:11.261446  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.261602  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 08:30:11.261990  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.264208  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 08:30:11.265661  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 08:30:11.266953  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 08:30:11.268988  387539 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 08:30:11.269090  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 08:30:11.270103  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.270359  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 08:30:11.270376  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 08:30:11.270435  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.270601  387539 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 08:30:11.270610  387539 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 08:30:11.270648  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.275993  387539 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 08:30:11.282092  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.282115  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 08:30:11.282181  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.285473  387539 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 08:30:11.290090  387539 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.291158  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 08:30:11.295912  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 08:30:11.295961  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.299675  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.313891  387539 out.go:179]   - Using image docker.io/busybox:stable
	I0929 08:30:11.315473  387539 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 08:30:11.316814  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.316848  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 08:30:11.316910  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.317050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.323553  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.332930  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.335659  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.338799  387539 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:11.338893  387539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 08:30:11.338992  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.348819  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.349921  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:30:11.354726  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.358638  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.365096  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.375197  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.379217  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	W0929 08:30:11.383998  387539 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 08:30:11.384044  387539 retry.go:31] will retry after 372.305387ms: ssh: handshake failed: EOF
	I0929 08:30:11.384985  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.385740  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.455618  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 08:30:11.455652  387539 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 08:30:11.483956  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 08:30:11.483993  387539 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 08:30:11.501077  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 08:30:11.501104  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 08:30:11.512909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 08:30:11.512936  387539 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 08:30:11.513909  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 08:30:11.513933  387539 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 08:30:11.522184  387539 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.522210  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 08:30:11.532474  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.547827  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.549888  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 08:30:11.549921  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 08:30:11.551406  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 08:30:11.551429  387539 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 08:30:11.551604  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.551620  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 08:30:11.562054  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.567658  387539 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.567682  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 08:30:11.568342  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.575483  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.579024  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.580084  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.589345  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 08:30:11.589374  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 08:30:11.591142  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.596651  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.617511  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.639242  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 08:30:11.639268  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 08:30:11.640436  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.640457  387539 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 08:30:11.676132  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.683757  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 08:30:11.683933  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 08:30:11.694476  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.733321  387539 node_ready.go:35] waiting up to 6m0s for node "addons-051783" to be "Ready" ...
	I0929 08:30:11.737381  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 08:30:11.737409  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 08:30:11.739451  387539 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 08:30:11.742034  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 08:30:11.742058  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 08:30:11.860616  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 08:30:11.860647  387539 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 08:30:11.867313  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 08:30:11.867348  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 08:30:11.967456  387539 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:11.967489  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 08:30:11.972315  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 08:30:11.972363  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 08:30:12.022878  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:12.038007  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 08:30:12.038036  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 08:30:12.049218  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:12.116439  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 08:30:12.116470  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 08:30:12.218447  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 08:30:12.218482  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 08:30:12.270160  387539 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-051783" context rescaled to 1 replicas
	I0929 08:30:12.276753  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 08:30:12.276954  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 08:30:12.325380  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.325408  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 08:30:12.363377  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.640545  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.07217093s)
	W0929 08:30:12.640603  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:12.640631  387539 retry.go:31] will retry after 237.04452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:12.640719  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.065212731s)
	I0929 08:30:12.641043  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.061988054s)
	I0929 08:30:12.641104  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.060998244s)
	I0929 08:30:12.641174  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049961126s)
	I0929 08:30:12.837190  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.240492795s)
	I0929 08:30:12.837239  387539 addons.go:479] Verifying addon ingress=true in "addons-051783"
	I0929 08:30:12.837345  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.219781667s)
	I0929 08:30:12.837419  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161075095s)
	I0929 08:30:12.837447  387539 addons.go:479] Verifying addon registry=true in "addons-051783"
	I0929 08:30:12.837566  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.142937066s)
	I0929 08:30:12.837594  387539 addons.go:479] Verifying addon metrics-server=true in "addons-051783"
	I0929 08:30:12.839983  387539 out.go:179] * Verifying ingress addon...
	I0929 08:30:12.839983  387539 out.go:179] * Verifying registry addon...
	I0929 08:30:12.839983  387539 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-051783 service yakd-dashboard -n yakd-dashboard
	
	I0929 08:30:12.842161  387539 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 08:30:12.843164  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 08:30:12.846165  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:12.846189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:12.846718  387539 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 08:30:12.846741  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:12.878020  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.347067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.347316  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.444185  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.394912895s)
	W0929 08:30:13.444269  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 08:30:13.444303  387539 retry.go:31] will retry after 148.150087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
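(The failed apply above is an ordering race rather than a broken manifest: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is submitted in the same kubectl apply as the CRDs that define it, so the first pass fails with "ensure CRDs are installed first" and minikube simply retries, as the retry.go line below shows. A hedged sketch of avoiding the same race by hand, reusing the file paths and CRD name printed in the log, would be to apply the CRDs first and wait for them to be established:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl wait \
	  --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

The retry-on-failure approach in the log converges to the same end state once the CRDs are registered.)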
	I0929 08:30:13.444442  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.080991087s)
	I0929 08:30:13.444483  387539 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:13.446118  387539 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 08:30:13.448654  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 08:30:13.452016  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:13.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:13.577429  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.577457  387539 retry.go:31] will retry after 254.552952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
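(The ig-crd.yaml failure is different in kind: "apiVersion not set, kind not set" means kubectl's client-side validation found a document in that file with no type metadata at all, and the retries below keep failing the same way. A quick way to confirm what kubectl is seeing, assuming shell access to the node, is:

	# Show the head of the manifest kubectl rejected (path taken from the error above).
	sudo head -n 10 /etc/kubernetes/addons/ig-crd.yaml
	# A well-formed CRD document is expected to start with type metadata such as:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
	# --validate=false, suggested in the error text, would only suppress the check, not fix the manifest.

This is an illustrative diagnostic only; the example header lines are standard Kubernetes fields, not the contents of this particular file.)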
	I0929 08:30:13.593694  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0929 08:30:13.737433  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:13.832408  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.846313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.846455  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.952328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.346125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.346258  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.845799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.845811  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.951680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.346030  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.346221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.453724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.845371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.845746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.952128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.053703  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.459968372s)
	I0929 08:30:16.053810  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.22138062s)
	W0929 08:30:16.053859  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:16.053883  387539 retry.go:31] will retry after 481.367348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:16.235952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
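(The node_ready.go warnings record that addons-051783 has not yet posted a Ready condition, so most of the addon pods above stay Pending in the meantime. Outside the test harness, the same condition can be read directly, assuming the same context name:

	kubectl --context addons-051783 get node addons-051783 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	# Typically prints "False" while the node is still coming up and flips to "True"
	# once the kubelet and network plugin are ready.

A sketch for inspection only; the harness keeps polling this itself, as the repeated "will retry" lines show.)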
	I0929 08:30:16.346141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.346415  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.452678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.535851  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:16.846177  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.846299  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.951988  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:17.090051  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.090084  387539 retry.go:31] will retry after 480.173629ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.345653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.345864  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.453018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:17.571186  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:17.846646  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.846705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.952363  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:18.133672  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:18.133711  387539 retry.go:31] will retry after 1.605452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:18.236698  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:18.345996  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.346227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.831696  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 08:30:18.831773  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:18.846470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.846549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.851454  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:18.951695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.969096  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 08:30:18.989016  387539 addons.go:238] Setting addon gcp-auth=true in "addons-051783"
	I0929 08:30:18.989103  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:18.989486  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:19.008865  387539 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 08:30:19.008932  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:19.027173  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:19.120755  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:19.121923  387539 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 08:30:19.122900  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 08:30:19.122919  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 08:30:19.143102  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 08:30:19.143126  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 08:30:19.162866  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.162888  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 08:30:19.183136  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
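(The lines above copy the service-account credentials and project id onto the node and apply the gcp-auth namespace, service and webhook manifests; the addon is then verified through the gcp-auth pod wait that follows. If the result needs to be checked by hand, something along these lines should work; the resource names are assumptions based on the manifest and image names in the log:

	kubectl --context addons-051783 -n gcp-auth get pods,svc
	# The addon is expected to register a mutating admission webhook for credential injection:
	kubectl --context addons-051783 get mutatingwebhookconfigurations | grep -i gcp-auth

Treat this as a hedged manual check, not part of the test's own verification.)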
	I0929 08:30:19.346348  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.346554  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.453192  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:19.501972  387539 addons.go:479] Verifying addon gcp-auth=true in "addons-051783"
	I0929 08:30:19.503639  387539 out.go:179] * Verifying gcp-auth addon...
	I0929 08:30:19.505850  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 08:30:19.554509  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 08:30:19.554531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:19.740347  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:19.845786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.845969  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.951989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.008598  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.299545  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.299581  387539 retry.go:31] will retry after 1.544699875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.345964  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.452158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.736317  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:20.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.845861  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.951742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.009815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.346000  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.346032  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.451989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.508685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.845176  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:21.845841  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.846114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.952278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.009273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:22.345019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.346075  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 08:30:22.403582  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.403621  387539 retry.go:31] will retry after 3.049515308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.452614  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.512271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:22.736403  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:22.845553  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:22.951921  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.010165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.345659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.345820  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.451629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.509351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.846115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.846228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.952047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.008926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:24.346005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.452131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.509321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:24.737273  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:24.845357  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.845622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.951671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.010110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.346716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.346788  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.453468  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:25.510278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.845392  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.845982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.951775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:26.006239  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.006394  387539 retry.go:31] will retry after 2.506202781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.008893  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.346077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.346300  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.452870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:26.510002  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.845936  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.846437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.952599  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.010142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:27.237031  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:27.345974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.346037  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.451702  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.509719  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:27.845995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.846262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.952122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.008966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.345646  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.346068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.452500  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.509096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.513240  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:28.845526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.845724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.952636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.009980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.073172  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:29.073204  387539 retry.go:31] will retry after 5.087993961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:29.345624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.345890  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.451566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.509314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.736247  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:29.845167  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.845589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.952470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.009285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.345961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.346228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.451762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.509671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.845660  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.845938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.951757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.010434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:31.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.346159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.452024  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.508639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:31.736734  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:31.845802  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.846069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.008631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.345183  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.345554  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.452360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.509283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.846011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.846198  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.952029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.008505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.345468  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.346184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.845492  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.845973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.951615  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.009499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:34.161747  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 08:30:34.236880  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:34.346017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.346168  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.451966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:34.713989  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.714029  387539 retry.go:31] will retry after 10.074915141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.846205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.846262  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.952041  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.009299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.346101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.346147  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.452133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.508814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.845885  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.846022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.952026  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.008870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:36.345968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.346092  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.452038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.508708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:36.736573  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:36.845946  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.846138  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.951934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.010147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.345611  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.346391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.452092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.508537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.845236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.845710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.951391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.009185  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:38.345379  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.345497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.452268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.509054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:38.736952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:38.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.845942  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.951848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.009583  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.345482  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.345749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.452467  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.509234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.845877  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.845968  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.951690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:40.345848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.346009  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.451555  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:40.737059  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:40.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.845985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.951632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.009343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.345541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.346172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.452233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.509214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.846040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.846112  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.951896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.009603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.345289  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.451783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.509700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.845799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.845983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.951967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.008596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:43.236598  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:43.346000  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.346147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.452087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.509013  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:43.846134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.846259  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.952036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.008744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.345998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.346244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.452116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.508722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.789668  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:44.848890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.848956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.952825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:45.346063  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.346265  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:30:45.349824  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.349902  387539 retry.go:31] will retry after 10.254228561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
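The two log events above show the inspektor-gadget addon apply failing validation on /etc/kubernetes/addons/ig-crd.yaml (kubectl rejects a manifest document that has no apiVersion or kind set) and minikube scheduling another attempt after a backoff of roughly ten seconds. Below is a minimal, standard-library-only Go sketch of that retry-after-backoff pattern; the helper name, attempt budget, and delays are illustrative assumptions, not minikube's actual retry.go API.

	// retryApply re-runs an apply-style kubectl command until it succeeds or the
	// attempt budget is exhausted, sleeping a growing, jittered delay between
	// attempts (illustrative sketch only; not minikube's retry.go).
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func retryApply(args []string, attempts int) error {
		var lastErr error
		delay := 5 * time.Second
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
			// Add jitter so repeated failures do not retry in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", sleep, lastErr)
			time.Sleep(sleep)
			delay *= 2
		}
		return lastErr
	}

	func main() {
		args := []string{"apply", "--force", "-f", "/etc/kubernetes/addons/ig-crd.yaml"}
		if err := retryApply(args, 3); err != nil {
			fmt.Println("giving up:", err)
		}
	}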
	I0929 08:30:45.451609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.509499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:45.736311  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:45.845308  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:45.845508  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.952578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.345276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.345820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.451640  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.509515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.845665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.845801  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.951610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.009568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:47.346135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.347757  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.451685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.509687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:47.736659  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:47.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.846278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.010881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.345580  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.346116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.508539  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.845649  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.845738  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.951441  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.009204  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.345513  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.345678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.451528  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.845483  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.846049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.951870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.009622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:50.236705  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:50.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.346397  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.452090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.508959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:50.845410  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.846029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.008722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.345637  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.346169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.452115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.845715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.845962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.951758  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.009370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.345902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.451699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.509385  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.735450  387539 node_ready.go:49] node "addons-051783" is "Ready"
	I0929 08:30:52.735486  387539 node_ready.go:38] duration metric: took 41.00212415s for node "addons-051783" to be "Ready" ...
	I0929 08:30:52.735510  387539 api_server.go:52] waiting for apiserver process to appear ...
	I0929 08:30:52.735569  387539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 08:30:52.754269  387539 api_server.go:72] duration metric: took 41.621848619s to wait for apiserver process to appear ...
	I0929 08:30:52.754302  387539 api_server.go:88] waiting for apiserver healthz status ...
	I0929 08:30:52.754329  387539 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 08:30:52.758629  387539 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 08:30:52.759566  387539 api_server.go:141] control plane version: v1.34.1
	I0929 08:30:52.759591  387539 api_server.go:131] duration metric: took 5.283085ms to wait for apiserver health ...
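The lines at 08:30:52 above poll the apiserver's /healthz endpoint until it answers HTTP 200 with "ok". The Go sketch below shows that kind of poll loop in its simplest form; the TLS handling (skipping certificate verification) and the timeouts are simplifications for illustration, not how minikube's api_server.go performs the check.

	// pollHealthz polls an apiserver-style /healthz endpoint until it returns
	// HTTP 200 or the deadline passes (illustrative sketch only).
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.49.2:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}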
	I0929 08:30:52.759601  387539 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 08:30:52.763531  387539 system_pods.go:59] 20 kube-system pods found
	I0929 08:30:52.763568  387539 system_pods.go:61] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.763584  387539 system_pods.go:61] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.763591  387539 system_pods.go:61] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.763598  387539 system_pods.go:61] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.763604  387539 system_pods.go:61] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.763610  387539 system_pods.go:61] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.763618  387539 system_pods.go:61] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.763625  387539 system_pods.go:61] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.763632  387539 system_pods.go:61] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.763646  387539 system_pods.go:61] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.763655  387539 system_pods.go:61] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.763661  387539 system_pods.go:61] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.763671  387539 system_pods.go:61] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.763677  387539 system_pods.go:61] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.763685  387539 system_pods.go:61] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.763695  387539 system_pods.go:61] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.763703  387539 system_pods.go:61] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.763711  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.763762  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.763769  387539 system_pods.go:61] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.763779  387539 system_pods.go:74] duration metric: took 4.172047ms to wait for pod list to return data ...
	I0929 08:30:52.763792  387539 default_sa.go:34] waiting for default service account to be created ...
	I0929 08:30:52.766094  387539 default_sa.go:45] found service account: "default"
	I0929 08:30:52.766121  387539 default_sa.go:55] duration metric: took 2.321933ms for default service account to be created ...
	I0929 08:30:52.766133  387539 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 08:30:52.770696  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:52.770757  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.770770  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.770776  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.770784  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.770789  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.770794  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.770802  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.770808  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.770815  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.770824  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.770843  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.770851  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.770863  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.770872  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.770881  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.770891  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.770899  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.770908  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.770928  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.770935  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.770959  387539 retry.go:31] will retry after 296.951592ms: missing components: kube-dns
	I0929 08:30:52.847272  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:52.847306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.847283  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.956403  387539 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:52.956428  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.058959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.074050  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.074084  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.074092  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.074102  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.074109  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.074114  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.074118  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.074124  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.074127  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.074131  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.074136  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.074139  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.074143  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.074148  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.074158  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.074162  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.074167  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.074171  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.074177  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074185  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074189  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.074204  387539 retry.go:31] will retry after 260.486294ms: missing components: kube-dns
	I0929 08:30:53.340885  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.340928  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.340939  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.340949  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.340957  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.340970  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.340976  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.340984  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.340989  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.340994  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.341002  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.341007  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.341013  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.341020  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.341029  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.341037  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.341045  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.341052  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.341071  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341079  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341086  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.341104  387539 retry.go:31] will retry after 402.781904ms: missing components: kube-dns
	I0929 08:30:53.345674  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.452965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.509656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.749539  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.749584  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.749596  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.749607  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.749615  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.749625  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.749637  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.749644  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.749652  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.749658  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.749673  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.749681  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.749688  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.749700  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.749713  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.749725  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.749741  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.749752  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.749760  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749772  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749780  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.749803  387539 retry.go:31] will retry after 372.296454ms: missing components: kube-dns
	I0929 08:30:53.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.846351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.953470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.009621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.127961  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:54.128007  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:54.128016  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Running
	I0929 08:30:54.128029  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:54.128037  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:54.128046  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:54.128055  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:54.128068  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:54.128073  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:54.128080  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:54.128094  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:54.128101  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:54.128111  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:54.128119  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:54.128131  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:54.128140  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:54.128150  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:54.128156  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:54.128167  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128182  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128190  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Running
	I0929 08:30:54.128201  387539 system_pods.go:126] duration metric: took 1.362060932s to wait for k8s-apps to be running ...
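The kapi.go:96 waits and the "missing components: kube-dns" retries above all follow the same shape: list pods by label selector, check their phase, and sleep before trying again. A minimal Go sketch of that loop, shelling out to kubectl, is below; the selector, namespace, and timeout are examples taken from this run, and the helper is not minikube's kapi.go implementation.

	// waitForLabeledPods polls `kubectl get pods` for a label selector until
	// every matching pod reports phase Running (illustrative sketch only).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForLabeledPods(namespace, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "get", "pods",
				"-n", namespace, "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			phases := strings.Fields(string(out))
			if err == nil && len(phases) > 0 {
				allRunning := true
				for _, p := range phases {
					if p != "Running" {
						allRunning = false
						break
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("pods %q in %q not Running within %s", selector, namespace, timeout)
	}

	func main() {
		err := waitForLabeledPods("kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
		fmt.Println(err)
	}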
	I0929 08:30:54.128214  387539 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 08:30:54.128269  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 08:30:54.143506  387539 system_svc.go:56] duration metric: took 15.282529ms WaitForService to wait for kubelet
	I0929 08:30:54.143541  387539 kubeadm.go:578] duration metric: took 43.011126136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:30:54.143567  387539 node_conditions.go:102] verifying NodePressure condition ...
	I0929 08:30:54.146666  387539 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 08:30:54.146694  387539 node_conditions.go:123] node cpu capacity is 8
	I0929 08:30:54.146710  387539 node_conditions.go:105] duration metric: took 3.13874ms to run NodePressure ...
	I0929 08:30:54.146723  387539 start.go:241] waiting for startup goroutines ...
	I0929 08:30:54.346096  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.346452  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.452512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.845681  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.846213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.952945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.009776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.346034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.346210  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.452987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.604936  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:55.845661  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.846303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.952647  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.009596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:56.227075  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:56.227117  387539 retry.go:31] will retry after 11.111742245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:56.346587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.346664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.452545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.509737  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:56.846282  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.846404  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.952291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.008904  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.346213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.452947  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.553095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.845310  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.845536  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.952617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.009229  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.345911  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.452036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.509465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.846116  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.846300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.954223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.009020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.345799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.451999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.846016  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.846048  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.951820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.009510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.346008  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.346043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.452095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.845635  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.846133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.952120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.008582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.346305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.346398  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.452779  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.509350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.845977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.846089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.951976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.009725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.346046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.346195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.452152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.508856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.845624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.845816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.951786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.009165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.345570  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.345806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.452275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.508934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.846184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.846321  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.009280  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.345995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.346111  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.452256  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.509372  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.845664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.846025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.952025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.009380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.346175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.346181  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.452623  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.845511  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.845789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.951736  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.345807  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.346120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.452299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.845431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.845747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.951811  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.009905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.339106  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:07.345597  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:07.346187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.452931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.509578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.846266  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:31:07.899059  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:31:07.899089  387539 retry.go:31] will retry after 40.559996542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:31:07.952238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.009242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.345806  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.345963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.452237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.508727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.846489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.846533  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.952772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.010175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.346214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.346399  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.452814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.509683  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.846071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.846175  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.009101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.345238  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.346055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.509087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.845466  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.845735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.951734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.009376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.346018  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.346093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.452602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.509357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.845819  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.846106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.952393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.009094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.345109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.345635  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.452900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.509747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.845711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.009115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.345408  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.452396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.509231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.845494  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.846119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.952602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.010164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.346040  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.452353  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.845489  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.845815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.009711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.346376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.452852  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.509706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.846977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.847062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.952541  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.009327  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.345888  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.346265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.452465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.845448  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.845961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.010066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.345301  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.345698  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.451859  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.552769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.845897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.846010  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.951895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.345789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.345935  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.451969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.509592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.845904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.846320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.952560  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.009221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.345672  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.509390  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.845688  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.845944  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.346107  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.451968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.845746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.846140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.952760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.345888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.345967  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.452022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.510304  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.845633  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.846006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.952314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.346112  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.346281  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.452380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.509171  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.845463  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.952321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.008794  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.345924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.346134  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.452014  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.510198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.845423  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.845908  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.952121  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.008788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.345818  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.345880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.452709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.846079  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.346408  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.346645  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.452594  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.509856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.846416  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.846446  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.952577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.009243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.346002  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.346328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.452568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.509226  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.845630  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.845989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.009102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.344984  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.345670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.451721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.509670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.846298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.846328  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.952436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.009088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.345071  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.345514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.452990  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.509800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.845538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.845549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.952752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.345731  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.345767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.451898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.845660  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.845743  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.954437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.009591  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.345694  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.345826  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.451850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.509114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.845457  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.845863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.008880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.452522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.845340  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.846098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.345562  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.345751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.451752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.509498  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.846005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.846015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.952296  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.008916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.346067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.346085  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.452074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.508388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.846407  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.952505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.009198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.345603  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.345997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.452284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.508994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.845333  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.845899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.346173  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.346187  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.452081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.508670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.848907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.848908  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.951592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.009305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.345881  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.346217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.452391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.509291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.846291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.952619  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.345641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.346183  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.452340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.845435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.845657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.951659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.009365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.345904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.345948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.508874  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.846399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.952667  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.009535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.346057  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.346313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.452593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.509172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.845821  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.951931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.009666  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.345746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.345756  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.451930  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.509717  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.845968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.846159  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.952302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.008813  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.345751  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.346083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.452220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.508800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.846373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.846428  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.952582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.009477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.345816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.346146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.509082  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.845206  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.845593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.952920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.009344  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.346032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.452584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.846130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.846227  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.952242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.009320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.345668  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.346165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.452320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.846497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.846568  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.009270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.346009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.346017  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.452179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.508810  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.846318  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.953200  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.053765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.345928  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.345949  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.451841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.509367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.845759  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.009049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.346089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.346296  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.509276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.845998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.846031  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.953092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.008958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.348118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.348220  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.452645  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.459706  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:48.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.845521  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.846369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.952245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:31:49.009817  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:31:49.009958  387539 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0929 08:31:49.346161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.346314  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.452693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.509721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:49.846323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.846403  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.952288  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.009479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.346165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.346262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.511027  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.846141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.846346  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.952309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.345651  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.346358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.452496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.845910  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.846102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.952292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.008948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.346231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.346476  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.452572  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.846165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.846219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.009004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.452012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.510161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.845342  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.845616  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.952894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.009820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.346066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.346111  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.451951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.509668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.845920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.845975  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.952307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.008953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.346482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.346564  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.452557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.509198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.846008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.846122  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.952273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.009005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.345943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.345987  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.451970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.509693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.846279  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.846364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.952734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.345985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.452169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.509107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.845868  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.845918  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.952230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.008806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.346324  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.346362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.452386  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.509302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.845621  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.952271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.009231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.345552  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.346005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.452425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.509368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.846005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.846038  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.009825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.346371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.346435  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.845617  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.845923  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.952434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.009268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.346190  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.452432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.845820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.952038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.009864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.345911  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.346056  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.452757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.845906  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.846292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.952670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.345785  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.346020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.452457  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.509461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.846203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.952857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.008766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.346191  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.346205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.509374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.845874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.846090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.952199  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.009031  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.345858  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.345930  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.451888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.846482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.846625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.952585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.009218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.345706  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.452653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.509286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.845541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.845704  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.951956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.009468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.345695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.345745  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.451863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.510159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.845888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.845901  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.951951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.345980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.346046  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.452589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.509271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.846034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.952511  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.008945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.346573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.346620  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.452981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.509795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.846438  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.009110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.345733  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.451902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.509713  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.846101  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.846139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.952420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.009168  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.346099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.346223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.510142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.845960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.951897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.010286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.345508  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.346153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.452434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.845813  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.846236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.952299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.345858  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.346006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.452117  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.508849  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.845790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.846007  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.951901  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.346064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.346065  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.452106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.508883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.846158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.846171  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.952374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.008914  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.346557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.346608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.509895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.846827  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.846861  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.952699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.009411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.345859  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.346429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.452726  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.846572  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.846610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.952453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.345250  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.345814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.508754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.846167  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.846211  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.346188  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.509267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.846236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.846235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.952637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.009342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.345703  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.346091  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.509449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.846316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.846344  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.952405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.345264  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.346400  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.452542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.509262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.845773  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.346194  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.509376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.848839  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.849867  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.952936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.010023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.346114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.452763  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.846197  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.846244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.952388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.009290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.345800  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.346246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.452672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.509534  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.846304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.846334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.952785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.009642  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.346072  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.346415  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.452739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.509705  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.846107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.846335  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.952786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.009641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.346282  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.346356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.452912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.509769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.846639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.846675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.953086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.009130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.452469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.510429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.845959  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.846628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.953298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.009036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.347053  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:27.347275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.452777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.846103  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.846145  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.072906  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.073113  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.508950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.845955  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.846095  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.009351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.347464  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.347629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.453517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.553437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.846126  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.846245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.008971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.345959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.346015  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.509418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.845766  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.846285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.008956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.345931  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.346361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.452474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.845897  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.846021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.952093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.345435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.345772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.452246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.509083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.845812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.845956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.346099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.346120  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:33.452146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.508729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.846479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.036243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.036382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.345600  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.452267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.508982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.845610  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.845774  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.953630  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.008888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.346785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.346853  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.451866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.509729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.846406  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.954174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.055655  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.845459  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.845939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.951953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.009866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.346021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.346064  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.452076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.509650  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.952853  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.345624  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.452271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.509005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.845239  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.845607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.953072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.009685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.346312  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.346343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.452629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.846305  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.952898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.009523  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.346058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.346222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.452218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.509154  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.845436  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.845959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.952223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.008967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.345362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.345715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.451987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.509593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.846208  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.952460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.009083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.345364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.345994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.452312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.845412  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.846137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.009246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.345531  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.451965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.509607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.845677  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.845725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.953242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.008881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.346140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.346245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.508976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.846058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.846073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.008952  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.452230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.508958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.846253  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.846260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.952496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.009248  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.345700  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.346422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.452785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.846041  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.951796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.009505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.345956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.345992  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.451971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.509761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.846334  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.952805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.009735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.345689  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.346306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.452750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.845880  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.846359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.952570  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.009297  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.345969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.346094  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.452240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.509049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.846006  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.952184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.008907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.345976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.346081  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.451788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.510100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.845304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.848309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.345805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.345874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.451634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.509582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.845944  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.846447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.953076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.008934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.345804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.345877  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.452096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.508656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.846195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.846222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.952603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.009374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.345675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.846118  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.952566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.009207  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.345383  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.452193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.508803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.846518  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.846608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.952787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.009360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.346141  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.346211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.452319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.846350  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.846419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.952451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.009066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.345454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.345940  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.452221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.508812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.846088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.846113  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.952011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.345986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.346090  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.846361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.846431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.952871  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.009495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.346447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.346500  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.452249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.508841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.845781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.845828  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.951889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.009775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.346440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.346485  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.452552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.008817  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.346461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.346526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.455517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.845761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.845875  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.952068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.009767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.346151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.346291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.452530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.553772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.845974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.846019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.010114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.345293  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.345801  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.451761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.845976  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.846143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.952766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.009431  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.345682  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.346257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.453746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.509942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.846258  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.846309  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.952266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.009753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.346015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.346114  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.452202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.846315  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.846361  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.952432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.009137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.345758  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:05.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.552401  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.846099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.846460  387539 kapi.go:107] duration metric: took 2m53.003293209s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 08:33:05.954425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.011134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.346506  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.509064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.845958  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.952356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.009108  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.453032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.510592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.846109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.954081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.053417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.351454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.453361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.509493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.846396  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.953209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.013355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.346185  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.452954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.509941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.846594  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.953166  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.011098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.345673  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.452685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.846291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.952757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.010232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.345715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.452872  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.509757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.845940  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.009576  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.476146  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.476164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.508903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.846546  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.952547  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.009054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.345224  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:13.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.845854  387539 kapi.go:107] duration metric: took 3m1.003676867s to wait for app.kubernetes.io/name=ingress-nginx ...
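	(The duration-metric lines above close out the registry and ingress-nginx waits; the remaining kapi.go:96 lines keep polling only csi-hostpath-driver and gcp-auth. The log pattern is essentially a label-selector poll against the API server until the matching pods leave Pending. The Go sketch below is a minimal illustrative reconstruction under that assumption, built on client-go; the function name waitForLabeledPodsRunning, the 500ms interval, and the 6-minute timeout are assumptions for illustration, not minikube's actual kapi implementation.)

	// Illustrative sketch only: a label-selector poll of the kind the
	// "kapi.go:96 waiting for pod" lines above reflect. Names, interval,
	// and timeout are assumptions, not minikube's kapi code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabeledPodsRunning polls pods matching selector in ns until every
	// matching pod reports phase Running, or the timeout expires.
	func waitForLabeledPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					// No matching pods yet (or a transient API error): keep polling.
					fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		start := time.Now()
		if err := waitForLabeledPodsRunning(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=registry ...\n", time.Since(start))
	}

	(End of sketch; the captured log continues below with the csi-hostpath-driver and gcp-auth waits.)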
	I0929 08:33:13.953193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.953043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.009571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.509029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.952456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.008996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.452993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.509565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.951754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.010077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.452637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.509767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.951958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.009558  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.452610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.509383  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.953289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.452727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.952537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.054307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.453283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.508941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.509772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.452743  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.509695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.952306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.952897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.009648  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.508741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.952701  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.009545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.452359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.552870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.952571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.509263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.009531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.452141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.952219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.008825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.452943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.509596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.951821  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.009481  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.509195  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.953059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.053354  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.452999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.951979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.009797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.453388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.508724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.952067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.009597  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.452510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.509504  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.009757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.451725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.952055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.009994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.509072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.952958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.009293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.453339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.009056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.453293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.508838  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.953074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.013450  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.452649  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.509512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.952032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.009978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.509308  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.952931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.452323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.953222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.009006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.452790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.509538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.951932  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.009432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.455147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.508750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.952251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.009149  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.453440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.009671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.451894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.509637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.951679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.009272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.452122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.509896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.952875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.009456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.452086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.509855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.009503  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.509412  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.951948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.009749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.508624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.452489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.509007  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.952454  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.452929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.509326  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.953179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.009573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.452080  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.952316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.008983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.509589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.509141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.952449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.009163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.452673  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.952399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.008968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.452357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.509312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.008903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.452740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.008515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.952684  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.009269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.452860  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.509842  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.952800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.009471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.452132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.508760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.952191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.008875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.452781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.953587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.054438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.452155  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.508625  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.009015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.452064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.508595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.952010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.010061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.452878  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.509741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.008974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.452307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.508972  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.952590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.452989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.952475  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.453033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.509562  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.008939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.453017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.010460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.451978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.509900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.008912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.452986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.509922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.009396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.452015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.508696  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.053986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.453071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.508707  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.509938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.952708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.951905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.452055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.509716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.952153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.009034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.452857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.509634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.952411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.009151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.453043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.508787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.951746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.452755  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.509577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.951855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.009721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.452270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.509070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.952417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.009119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.452899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.509945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.008973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.452420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.952703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.009419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.452368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.509153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.009176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.451907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.509703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.952486  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.009310  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.453128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.509247  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.952807  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.452479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.951882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.009724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.952303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.052740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.452786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.509524  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.952084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.009393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.452606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.509227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.952919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.009449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.509272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.953056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.008665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.452311  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.952950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.452806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.951992  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.009677  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.454926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.556176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.952552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.009135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.452491  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.509187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.952765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.010044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.453284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.509124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.452601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.509427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.952099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.008641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.452715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.952690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.009533  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.452468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.509120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.952652  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.009453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.452283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.509034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.952982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.010277  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.452898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.509951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.952333  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.009152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.452796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.509514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.951891  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.452769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.509365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.952087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.009812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.508954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.953223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.009045  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.508795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.952125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.452644  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.509926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.009805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.452339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.509062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.952706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.009289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.453174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.553316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.952985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.009340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.453131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.951783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.009764  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.509221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.952799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.009661  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.451963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.951981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.009474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.451982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.510046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.952776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.009347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.451710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.952334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.452851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.509879  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.952761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.009609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.453093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.508618  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.952367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.009335  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.451828  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.509765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.952131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.008768  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.452125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.508617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.951915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.452347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.509044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.953033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.008575  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.509020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.009883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.952427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.008882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.451996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.509798  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.952349  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.008994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.452078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.953244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.008791  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.452820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.509438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.952276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.454329  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.508526  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.951927  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.009514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.509176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.953124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.008742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.452318  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.952978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.008626  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.451991  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.509530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.008765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.452089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.952535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.009257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.452850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.509391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.951665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.010070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.452234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.508751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.952557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.452356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.509081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.952954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.451578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.509069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.952979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.009394  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.451672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.953084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.008804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.452100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.508590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.952186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.008919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.451692  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.952159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.008936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.452290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.509522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.952657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.452687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.952004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.009665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.509219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.953317  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.053305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.452957  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.509406  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.951753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.010494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.451613  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.951916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.009368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.451621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.509537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.951986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.009697  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.509309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.953131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.008745  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.452118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.508915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.952506  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.009283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.453125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.508686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.009048  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.452863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.509403  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.952211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.009143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.452579  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.952593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.009236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.452668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.509287  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.953152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.008951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.451960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.951797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.009781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.452176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.508962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.952918  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.010145  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.452488  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.509471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.951970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.009582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.451912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.508700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.952497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.453230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.509119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.952889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.454455  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.509009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.953474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.009465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.452010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.509605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.951929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.508723  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.053411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.452665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.953146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.008802  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.451806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.509590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.952410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.053369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.509264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.952818  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.009233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.451994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.509760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.009364  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.452180  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.509156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.009587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.451744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.509487  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.008678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.451795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.509551  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.952298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.452628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.509567  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.952018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.008605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.509196  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.953269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.009042  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.452866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.509473  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.952009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.053084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.509189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.952595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.452191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.508730  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.952389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.452680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.509241  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.952532  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.009493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.452238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.509131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.952695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.009405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.452184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.509012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.952350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.009078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.452686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.509295  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.953015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.008664  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.452062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.508632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.952395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.008941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.451875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.509433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.952771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.009472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.509331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.953175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.009259  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.453005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.509759  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.952445  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.008890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.452239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.952339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.452889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.053004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.452816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.509585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.951856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.009542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.452139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.508997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.009668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.452051  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.952019  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.008662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.451816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.509495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.953217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.008712  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.452395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.008657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.451985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.509265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.953263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.008734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.509077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.952688  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.009433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.952693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.009377  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.452681  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.509209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.952342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.009052  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.452762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.509115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.953186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.010178  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.509505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.951715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.009812  387539 kapi.go:107] duration metric: took 5m46.503976887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 08:36:06.011826  387539 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-051783 cluster.
	I0929 08:36:06.013337  387539 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 08:36:06.014809  387539 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
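Note on the three out.go hints above: they describe an opt-out label rather than any command. As an illustration only, the following is a minimal Go sketch (using the k8s.io/api and k8s.io/apimachinery types) of a pod object carrying the `gcp-auth-skip-secret` label the message refers to. The pod name, image, and the label value "true" are assumptions for the example; only the label key comes from the log line above.

// Hypothetical sketch of a pod that opts out of gcp-auth credential mounting.
// Only the label key "gcp-auth-skip-secret" is taken from the minikube hint;
// the value "true" and all names here are assumptions for illustration.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // hypothetical name
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "docker.io/library/busybox:stable"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Per the hint above, a pod created with such a label would be left without the mounted credentials, while all other pods in the addons-051783 cluster get them.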
	I0929 08:36:06.452825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.952244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.452410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.952142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.452175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.451974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.953036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.452917  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.953235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.451608  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.952203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.952132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:13.449535  387539 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0929 08:36:13.449570  387539 kapi.go:107] duration metric: took 6m0.00092228s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0929 08:36:13.449699  387539 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0929 08:36:13.451535  387539 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth
	I0929 08:36:13.453038  387539 addons.go:514] duration metric: took 6m2.320628972s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth]
	I0929 08:36:13.453089  387539 start.go:246] waiting for cluster config update ...
	I0929 08:36:13.453117  387539 start.go:255] writing updated cluster config ...
	I0929 08:36:13.453476  387539 ssh_runner.go:195] Run: rm -f paused
	I0929 08:36:13.457677  387539 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:13.461120  387539 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.465176  387539 pod_ready.go:94] pod "coredns-66bc5c9577-n8bx8" is "Ready"
	I0929 08:36:13.465203  387539 pod_ready.go:86] duration metric: took 4.058605ms for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.467075  387539 pod_ready.go:83] waiting for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.470714  387539 pod_ready.go:94] pod "etcd-addons-051783" is "Ready"
	I0929 08:36:13.470733  387539 pod_ready.go:86] duration metric: took 3.636114ms for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.472521  387539 pod_ready.go:83] waiting for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.476217  387539 pod_ready.go:94] pod "kube-apiserver-addons-051783" is "Ready"
	I0929 08:36:13.476238  387539 pod_ready.go:86] duration metric: took 3.697266ms for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.478025  387539 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.862501  387539 pod_ready.go:94] pod "kube-controller-manager-addons-051783" is "Ready"
	I0929 08:36:13.862531  387539 pod_ready.go:86] duration metric: took 384.48807ms for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.061450  387539 pod_ready.go:83] waiting for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.461226  387539 pod_ready.go:94] pod "kube-proxy-wbl7p" is "Ready"
	I0929 08:36:14.461255  387539 pod_ready.go:86] duration metric: took 399.774957ms for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.661898  387539 pod_ready.go:83] waiting for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061371  387539 pod_ready.go:94] pod "kube-scheduler-addons-051783" is "Ready"
	I0929 08:36:15.061418  387539 pod_ready.go:86] duration metric: took 399.4933ms for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061435  387539 pod_ready.go:40] duration metric: took 1.603719933s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:15.109384  387539 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 08:36:15.111939  387539 out.go:179] * Done! kubectl is now configured to use "addons-051783" cluster and "default" namespace by default
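Everything from the kapi.go:96 lines above down to the addons.go:514 summary is one poll loop: roughly every 500 ms minikube lists pods matching a label selector (kubernetes.io/minikube-addons=gcp-auth, then ...=csi-hostpath-driver) and logs the observed state until the pods are ready or the per-addon deadline expires, which is why the csi-hostpath-driver wait ends at exactly 6m0s with "context deadline exceeded" at 08:36:13. Below is a minimal client-go sketch of that pattern; the function and variable names are hypothetical and this is not minikube's actual kapi.go implementation.

// Hypothetical sketch of the label-selector poll the log above records:
// list pods matching a selector every 500ms until they are all Running
// or the context deadline (6 minutes here) expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// once the deadline passes, the client surfaces the context error,
			// much like the "temporary error ... client rate limiter" line above
			return fmt.Errorf("getting pods with label selector %q: %w", selector, err)
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context deadline exceeded, as at 08:36:13 above
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("loading kubeconfig:", err)
		return
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
		fmt.Println("wait failed:", err)
	}
}

With a 6-minute context and a selector that never matches a Running pod, this ends the same way the log does: the wait returns "context deadline exceeded" and the caller reports the addon-enable warning.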
	
	
	==> CRI-O <==
	Sep 29 08:43:10 addons-051783 crio[938]: time="2025-09-29 08:43:10.154591381Z" level=info msg="Deleting pod kube-system_amd-gpu-device-plugin-xvf9b from CNI network \"kindnet\" (type=ptp)"
	Sep 29 08:43:10 addons-051783 crio[938]: time="2025-09-29 08:43:10.179890873Z" level=info msg="Stopped pod sandbox: 7fb63ae5a120a69174778c9cc706e2e9ba4bcfcf3c8a583e9ac04ac6ae8920c4" id=bf0157b2-6e42-4017-aded-7faa1c0c6430 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:43:14 addons-051783 crio[938]: time="2025-09-29 08:43:14.321482270Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=a8545988-267c-431a-baaf-46c0d8636c9d name=/runtime.v1.ImageService/PullImage
	Sep 29 08:43:14 addons-051783 crio[938]: time="2025-09-29 08:43:14.325270112Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 29 08:43:15 addons-051783 crio[938]: time="2025-09-29 08:43:15.123227181Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=a17a9fff-8717-4ffd-a899-d64ca57d2f39 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:43:15 addons-051783 crio[938]: time="2025-09-29 08:43:15.123592648Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=a17a9fff-8717-4ffd-a899-d64ca57d2f39 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:43:18 addons-051783 crio[938]: time="2025-09-29 08:43:18.958738617Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=7095a5eb-cfb5-43bd-adee-54558dba8e39 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:43:18 addons-051783 crio[938]: time="2025-09-29 08:43:18.959113103Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=7095a5eb-cfb5-43bd-adee-54558dba8e39 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:43:26 addons-051783 crio[938]: time="2025-09-29 08:43:26.958907075Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=c2545343-4fe1-42c0-a5d9-80b21bcb023b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:43:26 addons-051783 crio[938]: time="2025-09-29 08:43:26.959240209Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=c2545343-4fe1-42c0-a5d9-80b21bcb023b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:43:44 addons-051783 crio[938]: time="2025-09-29 08:43:44.972781374Z" level=info msg="Pulling image: docker.io/nginx:latest" id=fe24fc52-5676-45d4-ab93-c53900c41cbf name=/runtime.v1.ImageService/PullImage
	Sep 29 08:43:44 addons-051783 crio[938]: time="2025-09-29 08:43:44.979290337Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 29 08:43:57 addons-051783 crio[938]: time="2025-09-29 08:43:57.958822261Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=6fb93785-51fa-442f-ae45-137f275eb24e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:43:57 addons-051783 crio[938]: time="2025-09-29 08:43:57.959174365Z" level=info msg="Image docker.io/nginx:alpine not found" id=6fb93785-51fa-442f-ae45-137f275eb24e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:44:06 addons-051783 crio[938]: time="2025-09-29 08:44:06.112503968Z" level=info msg="Stopping pod sandbox: 7fb63ae5a120a69174778c9cc706e2e9ba4bcfcf3c8a583e9ac04ac6ae8920c4" id=ede6a8d7-90ae-4aa0-88f5-0a8c76b1cbb1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:44:06 addons-051783 crio[938]: time="2025-09-29 08:44:06.112568156Z" level=info msg="Stopped pod sandbox (already stopped): 7fb63ae5a120a69174778c9cc706e2e9ba4bcfcf3c8a583e9ac04ac6ae8920c4" id=ede6a8d7-90ae-4aa0-88f5-0a8c76b1cbb1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:44:06 addons-051783 crio[938]: time="2025-09-29 08:44:06.112902270Z" level=info msg="Removing pod sandbox: 7fb63ae5a120a69174778c9cc706e2e9ba4bcfcf3c8a583e9ac04ac6ae8920c4" id=925ac524-d05c-470a-bdd0-c8647abe447f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:44:06 addons-051783 crio[938]: time="2025-09-29 08:44:06.118801722Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 08:44:06 addons-051783 crio[938]: time="2025-09-29 08:44:06.118853581Z" level=info msg="Removed pod sandbox: 7fb63ae5a120a69174778c9cc706e2e9ba4bcfcf3c8a583e9ac04ac6ae8920c4" id=925ac524-d05c-470a-bdd0-c8647abe447f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:44:10 addons-051783 crio[938]: time="2025-09-29 08:44:10.958337050Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=44f4a646-0173-45e6-b9f0-3cfd538354f8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:44:10 addons-051783 crio[938]: time="2025-09-29 08:44:10.958588481Z" level=info msg="Image docker.io/nginx:alpine not found" id=44f4a646-0173-45e6-b9f0-3cfd538354f8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:44:15 addons-051783 crio[938]: time="2025-09-29 08:44:15.626300755Z" level=info msg="Pulling image: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=7ee61a48-b6a6-4fe7-b225-faef20832713 name=/runtime.v1.ImageService/PullImage
	Sep 29 08:44:15 addons-051783 crio[938]: time="2025-09-29 08:44:15.630107706Z" level=info msg="Trying to access \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\""
	Sep 29 08:44:25 addons-051783 crio[938]: time="2025-09-29 08:44:25.959666634Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=273d2a76-2898-4fd1-a18c-dba9bc5593ce name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:44:25 addons-051783 crio[938]: time="2025-09-29 08:44:25.959901243Z" level=info msg="Image docker.io/nginx:alpine not found" id=273d2a76-2898-4fd1-a18c-dba9bc5593ce name=/runtime.v1.ImageService/ImageStatus
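The CRI-O entries above alternate between PullImage / "Trying to access" attempts for docker.io images (nginx:alpine, nginx:latest, kicbase/minikube-ingress-dns, busybox:stable) and ImageStatus checks that keep answering "not found", i.e. those images never land in the local store. As a sketch only, the following Go program issues the same /runtime.v1.ImageService/ImageStatus RPC against the CRI-O socket; the socket path, timeout, and module choices are assumptions, not something taken from this log.

// Hypothetical sketch: ask the CRI image service whether docker.io/nginx:alpine
// is present locally, mirroring the ImageStatus entries in the CRI-O log above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// assumed default CRI-O socket path
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/nginx:alpine"},
	})
	if err != nil {
		fmt.Println("ImageStatus RPC failed:", err)
		return
	}
	if resp.Image == nil {
		// the case the log reports as "Image docker.io/nginx:alpine not found"
		fmt.Println("image not present in the local store")
		return
	}
	fmt.Println("image present, id:", resp.Image.Id)
}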
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	15470dfdbc373       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	27b09cd861214       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	f91efb30edf5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          7 minutes ago       Running             busybox                                  0                   b37a2c191a161       busybox
	b891eff935e5b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   0a15333993f59       csi-hostpathplugin-59n9q
	1b49b8a0c49b0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           8 minutes ago       Running             hostpath                                 0                   0a15333993f59       csi-hostpathplugin-59n9q
	78cd30ad0ac78       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                10 minutes ago      Running             node-driver-registrar                    0                   0a15333993f59       csi-hostpathplugin-59n9q
	80836b6027c82       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             11 minutes ago      Running             controller                               0                   3f400eb1db037       ingress-nginx-controller-9cc49f96f-qxqnk
	fa2f9b0c2f698       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            11 minutes ago      Running             gadget                                   0                   2b559b62ddeb7       gadget-p475s
	45863f8b96f32       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      11 minutes ago      Running             volume-snapshot-controller               0                   f6de9f678281f       snapshot-controller-7d9fbc56b8-xpkwb
	958aa9722d317       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   11 minutes ago      Running             csi-external-health-monitor-controller   0                   0a15333993f59       csi-hostpathplugin-59n9q
	727b1119f42fa       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             11 minutes ago      Running             csi-attacher                             0                   942be1f7fe3d6       csi-hostpath-attacher-0
	7cd9c383cc30b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   12 minutes ago      Exited              patch                                    0                   748502b4be4ae       ingress-nginx-admission-patch-scvfj
	a07e229bf44a3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      12 minutes ago      Running             volume-snapshot-controller               0                   6d94b7786d291       snapshot-controller-7d9fbc56b8-n65gp
	964faa56de026       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              12 minutes ago      Running             csi-resizer                              0                   e4387328f31ab       csi-hostpath-resizer-0
	739db184c3579       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             12 minutes ago      Running             local-path-provisioner                   0                   7bd7dc81e5ff1       local-path-provisioner-648f6765c9-mzt6q
	64ec0688b1d33       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   12 minutes ago      Exited              create                                   0                   544ece1299156       ingress-nginx-admission-create-rbxvf
	ec2908a8acb76       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             13 minutes ago      Running             coredns                                  0                   8e80666def432       coredns-66bc5c9577-n8bx8
	48e51a6b3842e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             13 minutes ago      Running             storage-provisioner                      0                   b3063249d1902       storage-provisioner
	e6e25b7f19aec       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             14 minutes ago      Running             kindnet-cni                              0                   ea7b34d68514f       kindnet-47v7m
	a04df67a3379a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             14 minutes ago      Running             kube-proxy                               0                   9dbf0742f683c       kube-proxy-wbl7p
	3d5bc8bd7f0ff       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             14 minutes ago      Running             etcd                                     0                   240e67822abd8       etcd-addons-051783
	2e4ff50d0ab7d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             14 minutes ago      Running             kube-apiserver                           0                   7d31b1c07e6fc       kube-apiserver-addons-051783
	6d75e80cafef2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             14 minutes ago      Running             kube-controller-manager                  0                   0e144a50e60a7       kube-controller-manager-addons-051783
	33ea9996cc1d3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             14 minutes ago      Running             kube-scheduler                           0                   eee48e5387175       kube-scheduler-addons-051783
	
	
	==> coredns [ec2908a8acb7634faddb0add70c1cdc6e4b2ec0e64082e83c00bcc1f5187825c] <==
	[INFO] 10.244.0.22:53146 - 52855 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135376s
	[INFO] 10.244.0.22:44463 - 13157 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003407125s
	[INFO] 10.244.0.22:42741 - 2598 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005880456s
	[INFO] 10.244.0.22:43358 - 65412 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005081069s
	[INFO] 10.244.0.22:56808 - 9814 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005221504s
	[INFO] 10.244.0.22:57222 - 14161 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005164648s
	[INFO] 10.244.0.22:51834 - 10942 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006548594s
	[INFO] 10.244.0.22:37769 - 48093 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004505471s
	[INFO] 10.244.0.22:41744 - 45710 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007413415s
	[INFO] 10.244.0.22:56260 - 25719 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002697955s
	[INFO] 10.244.0.22:35710 - 58420 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003322975s
	[INFO] 10.244.0.26:59060 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000230685s
	[INFO] 10.244.0.26:45421 - 3 "AAAA IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000136278s
	[INFO] 10.244.0.26:44591 - 4 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116365s
	[INFO] 10.244.0.26:57553 - 5 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117524s
	[INFO] 10.244.0.26:49960 - 6 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003803543s
	[INFO] 10.244.0.26:37529 - 7 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 204 0.004482599s
	[INFO] 10.244.0.26:51766 - 8 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.int. udp 75 false 512" NXDOMAIN qr,rd,ra 148 0.147452363s
	[INFO] 10.244.0.26:46339 - 9 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000143392s
	[INFO] 10.244.0.26:35817 - 10 "A IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000114781s
	[INFO] 10.244.0.26:57333 - 11 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128127s
	[INFO] 10.244.0.26:33589 - 12 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009747s
	[INFO] 10.244.0.26:38381 - 13 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003185786s
	[INFO] 10.244.0.26:42582 - 14 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 204 0.005148102s
	[INFO] 10.244.0.26:42532 - 15 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.int. udp 75 false 512" NXDOMAIN qr,rd,ra 148 0.130600393s
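The last block of queries above (client 10.244.0.26) shows the usual in-cluster resolver behaviour: the client asks for registry.kube-system.svc.cluster.local and for its resolv.conf search-list expansions (.default.svc.cluster.local, .svc.cluster.local, .cluster.local, .local, plus the GCE internal domains), and every answer in this block is NXDOMAIN. A stdlib-only Go sketch of that lookup with a deadline follows; the 5-second timeout is an arbitrary choice for the example.

// Minimal sketch of the in-cluster name lookup the coredns entries above record.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	addrs, err := net.DefaultResolver.LookupHost(ctx, "registry.kube-system.svc.cluster.local")
	if err != nil {
		// an NXDOMAIN answer, as in the log, surfaces as a *net.DNSError
		// with IsNotFound set rather than an address list
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}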
	
	
	==> describe nodes <==
	Name:               addons-051783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-051783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=addons-051783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-051783
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-051783"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 08:30:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-051783
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 08:44:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-051783
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 83273b57f406470abdf516e252de2f52
	  System UUID:                ec5529e1-1ad9-400f-8294-1adf6616ba82
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-p475s                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-qxqnk                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-n8bx8                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-59n9q                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-051783                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-47v7m                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-051783                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-051783                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-wbl7p                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-051783                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-7d9fbc56b8-n65gp                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-7d9fbc56b8-xpkwb                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  local-path-storage          local-path-provisioner-648f6765c9-mzt6q                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-051783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-051783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-051783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node addons-051783 event: Registered Node addons-051783 in Controller
	  Normal  NodeReady                13m   kubelet          Node addons-051783 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[ +16.774979] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 21 41 37 dd f5 08 06
	[  +0.000328] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[  +6.075530] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 33 34 7b 85 cf 08 06
	[  +0.055887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[Sep29 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fb 19 b5 d0 db 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[  +6.806604] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[ +13.433681] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	[  +8.966707] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 f7 73 94 db cd 08 06
	[  +0.000344] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[Sep29 08:07] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 ad d0 02 25 47 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	
	
	==> etcd [3d5bc8bd7f0ffa9831231e2ccd173ca20be89d6dcc1ee1ad3b14f8dd9571bb86] <==
	{"level":"warn","ts":"2025-09-29T08:30:02.997494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.003681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.011615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.018242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.030088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.033604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.039960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.046371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.100824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.793114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.799945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.542994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.549599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.569139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.575527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:32:28.071330Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.763336ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071530Z","caller":"traceutil/trace.go:172","msg":"trace[30119979] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1117; }","duration":"161.980989ms","start":"2025-09-29T08:32:27.909530Z","end":"2025-09-29T08:32:28.071511Z","steps":["trace[30119979] 'range keys from in-memory index tree'  (duration: 161.701686ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T08:32:28.071329Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.131454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071650Z","caller":"traceutil/trace.go:172","msg":"trace[1183857226] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"120.458435ms","start":"2025-09-29T08:32:27.951174Z","end":"2025-09-29T08:32:28.071633Z","steps":["trace[1183857226] 'range keys from in-memory index tree'  (duration: 120.052644ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T08:33:12.239457Z","caller":"traceutil/trace.go:172","msg":"trace[155675200] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"129.084223ms","start":"2025-09-29T08:33:12.110348Z","end":"2025-09-29T08:33:12.239432Z","steps":["trace[155675200] 'process raft request'  (duration: 69.579624ms)","trace[155675200] 'compare'  (duration: 59.405727ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T08:33:12.474373Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.785446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:33:12.474452Z","caller":"traceutil/trace.go:172","msg":"trace[1612262900] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1258; }","duration":"129.87677ms","start":"2025-09-29T08:33:12.344560Z","end":"2025-09-29T08:33:12.474437Z","steps":["trace[1612262900] 'range keys from in-memory index tree'  (duration: 129.713966ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T08:40:02.621144Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1444}
	{"level":"info","ts":"2025-09-29T08:40:02.644347Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1444,"took":"22.608235ms","hash":1501025519,"current-db-size-bytes":6053888,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":3846144,"current-db-size-in-use":"3.8 MB"}
	{"level":"info","ts":"2025-09-29T08:40:02.644399Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1501025519,"revision":1444,"compact-revision":-1}
	
	
	==> kernel <==
	 08:44:29 up  2:26,  0 users,  load average: 0.25, 0.27, 0.59
	Linux addons-051783 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e6e25b7f19aec7f99b8219bbbaa88084f2510369dbfa360e267a083261d1c336] <==
	I0929 08:42:22.477267       1 main.go:301] handling current node
	I0929 08:42:32.479923       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:42:32.479954       1 main.go:301] handling current node
	I0929 08:42:42.475935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:42:42.475981       1 main.go:301] handling current node
	I0929 08:42:52.482940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:42:52.482978       1 main.go:301] handling current node
	I0929 08:43:02.482239       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:02.482281       1 main.go:301] handling current node
	I0929 08:43:12.475406       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:12.475464       1 main.go:301] handling current node
	I0929 08:43:22.478206       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:22.478244       1 main.go:301] handling current node
	I0929 08:43:32.480022       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:32.480059       1 main.go:301] handling current node
	I0929 08:43:42.476907       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:42.476958       1 main.go:301] handling current node
	I0929 08:43:52.477932       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:52.477972       1 main.go:301] handling current node
	I0929 08:44:02.478944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:02.478994       1 main.go:301] handling current node
	I0929 08:44:12.476129       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:12.476160       1 main.go:301] handling current node
	I0929 08:44:22.479291       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:22.479359       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e4ff50d0ab7df575a409e71f6c86b1e3bd4b8f41db0427eb9d65cbbef08b9a3] <==
	W0929 08:30:52.660152       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.660293       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.661168       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.661206       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.680870       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.680901       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.682064       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.682170       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:59.130480       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 08:30:59.130524       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.130558       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 08:30:59.130912       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.135946       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.157237       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	I0929 08:30:59.225977       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 08:36:44.813354       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47410: use of closed network connection
	E0929 08:36:44.997114       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47438: use of closed network connection
	I0929 08:36:54.051263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.58.104"}
	I0929 08:37:00.154224       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 08:37:00.239132       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 08:37:00.408198       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.245.4"}
	I0929 08:40:03.495564       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
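The apiserver errors above show the aggregation layer repeatedly failing to reach metrics-server while its endpoint at 10.101.200.83:443 was still refusing connections, and the gcp-auth mutating webhook failing open while its backing service was likewise unreachable; the later "Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager" line suggests the APIService did register eventually. To confirm the aggregated API recovered, one could check (a sketch; the APIService name is taken from the log above):

  kubectl get apiservice v1beta1.metrics.k8s.io

The AVAILABLE column should read True once metrics-server is serving.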
	
	
	==> kube-controller-manager [6d75e80cafef289bcb0634728686530f7d177ec79248071405ed0223eda388c2] <==
	E0929 08:30:40.536876       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 08:30:40.537102       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0929 08:30:40.537173       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 08:30:40.560116       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0929 08:30:40.563366       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 08:30:40.638265       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:30:40.663861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 08:30:55.534409       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0929 08:36:58.265328       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0929 08:37:30.688902       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E0929 08:40:11.149027       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.166100       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.188892       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.222082       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.275741       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.368102       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.541260       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.874921       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:12.528852       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:13.822456       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:16.394998       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:21.527876       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:31.780772       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:52.275224       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	I0929 08:41:38.386135       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
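The repeated "deletion of namespace yakd-dashboard failed" errors are the namespace controller backing off while pods in yakd-dashboard were still present; the final line confirms the namespace was eventually removed at 08:41:38. If a namespace stays stuck in Terminating, one way to see what is holding it up (a sketch using only stock kubectl) is:

  kubectl get namespace yakd-dashboard -o jsonpath='{.status.conditions}'
  kubectl api-resources --verbs=list --namespaced -o name \
    | xargs -n 1 kubectl get --show-kind --ignore-not-found -n yakd-dashboard

The second command lists every namespaced object that still exists in the namespace.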
	
	
	==> kube-proxy [a04df67a3379aa412e270c65b38675702f42ba0dc9e5c07b8052fb9a090d6471] <==
	I0929 08:30:12.128941       1 server_linux.go:53] "Using iptables proxy"
	I0929 08:30:12.417641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:30:12.520178       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:30:12.520269       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:30:12.522477       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:30:12.570590       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:30:12.570755       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:30:12.583981       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:30:12.584563       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:30:12.584628       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:30:12.586703       1 config.go:200] "Starting service config controller"
	I0929 08:30:12.586768       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:30:12.586873       1 config.go:309] "Starting node config controller"
	I0929 08:30:12.586913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:30:12.586938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 08:30:12.587504       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:30:12.587567       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:30:12.587568       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:30:12.587628       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:30:12.687916       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:30:12.688043       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:30:12.688062       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [33ea9996cc1d356857ab17f8e8157021f2b58227ecdb78065f0395986fc73f7b] <==
	E0929 08:30:03.522570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:03.522679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:03.522790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:03.522954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 08:30:03.522963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:03.522973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:03.523052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 08:30:03.523168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 08:30:03.523181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:03.523198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:03.523218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 08:30:03.523269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:30:03.523304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 08:30:03.523373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:03.523781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:30:04.391474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:04.430593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:04.474872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:04.497934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:04.640977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:04.655178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:04.765484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:04.784825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:04.965095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 08:30:06.819658       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
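The "Failed to watch ... is forbidden" errors above are confined to the first seconds after the API server came up, before the bootstrap RBAC rules for system:kube-scheduler were in place; the closing "Caches are synced" line shows the scheduler recovered on its own. If similar errors persisted, checking the bootstrap binding would be a reasonable first step (sketch):

  kubectl get clusterrolebinding system:kube-scheduler -o wide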
	
	
	==> kubelet <==
	Sep 29 08:43:36 addons-051783 kubelet[1568]: E0929 08:43:36.123718    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135416123453543  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:43:36 addons-051783 kubelet[1568]: E0929 08:43:36.123753    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135416123453543  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:43:44 addons-051783 kubelet[1568]: E0929 08:43:44.972273    1568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 08:43:44 addons-051783 kubelet[1568]: E0929 08:43:44.972338    1568 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 08:43:44 addons-051783 kubelet[1568]: E0929 08:43:44.972542    1568 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(b3f305e2-2997-431f-b6d3-7d97f0b357aa): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:43:44 addons-051783 kubelet[1568]: E0929 08:43:44.972613    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b3f305e2-2997-431f-b6d3-7d97f0b357aa"
	Sep 29 08:43:46 addons-051783 kubelet[1568]: E0929 08:43:46.125434    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135426125203724  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:43:46 addons-051783 kubelet[1568]: E0929 08:43:46.125470    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135426125203724  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:43:55 addons-051783 kubelet[1568]: I0929 08:43:55.958441    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:43:56 addons-051783 kubelet[1568]: E0929 08:43:56.128412    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135436128086208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:43:56 addons-051783 kubelet[1568]: E0929 08:43:56.128453    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135436128086208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:43:57 addons-051783 kubelet[1568]: E0929 08:43:57.959500    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b3f305e2-2997-431f-b6d3-7d97f0b357aa"
	Sep 29 08:44:06 addons-051783 kubelet[1568]: E0929 08:44:06.131386    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135446131121140  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:06 addons-051783 kubelet[1568]: E0929 08:44:06.131421    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135446131121140  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:10 addons-051783 kubelet[1568]: E0929 08:44:10.958923    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b3f305e2-2997-431f-b6d3-7d97f0b357aa"
	Sep 29 08:44:15 addons-051783 kubelet[1568]: E0929 08:44:15.625801    1568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 08:44:15 addons-051783 kubelet[1568]: E0929 08:44:15.625884    1568 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 08:44:15 addons-051783 kubelet[1568]: E0929 08:44:15.626149    1568 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(c75569f9-aafe-41b4-9ffa-4e10d9573809): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:44:15 addons-051783 kubelet[1568]: E0929 08:44:15.626214    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c75569f9-aafe-41b4-9ffa-4e10d9573809"
	Sep 29 08:44:16 addons-051783 kubelet[1568]: E0929 08:44:16.133524    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135456133282398  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:16 addons-051783 kubelet[1568]: E0929 08:44:16.133555    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135456133282398  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:25 addons-051783 kubelet[1568]: E0929 08:44:25.960172    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b3f305e2-2997-431f-b6d3-7d97f0b357aa"
	Sep 29 08:44:26 addons-051783 kubelet[1568]: E0929 08:44:26.135628    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135466135388286  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:26 addons-051783 kubelet[1568]: E0929 08:44:26.135665    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135466135388286  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:44:29 addons-051783 kubelet[1568]: E0929 08:44:29.958874    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c75569f9-aafe-41b4-9ffa-4e10d9573809"
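Most of the kubelet errors above reduce to one cause: unauthenticated pulls of docker.io/nginx hit Docker Hub's rate limit (toomanyrequests), so the nginx and task-pv-pod containers never started. Two common mitigations, sketched here with placeholder credentials rather than anything taken from this run:

  # load an image that already exists on the host into the minikube node
  minikube -p addons-051783 image load docker.io/nginx:alpine

  # or pull with authenticated credentials via an imagePullSecret
  kubectl create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> --docker-password=<access-token>
  kubectl patch serviceaccount default \
    -p '{"imagePullSecrets":[{"name":"regcred"}]}'

The repeated HasDedicatedImageFs/eviction-manager errors appear to be a separate issue between this kubelet and cri-o's image-stats response; they recur throughout the log and look unrelated to the pull failures.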
	
	
	==> storage-provisioner [48e51a6b3842e2e63335e82d65f22a4db94233392a881d6d3ff86158809cd5ed] <==
	W0929 08:44:04.688875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:06.691862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:06.695634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:08.698743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:08.702985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:10.706049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:10.710931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:12.714223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:12.719673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:14.723062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:14.728637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:16.731586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:16.736382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:18.739822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:18.743977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:20.746872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:20.752404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:22.756146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:22.760101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:24.763217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:24.767055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:26.770594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:26.775616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:28.779158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:44:28.783882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
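The storage-provisioner warnings are deprecation notices returned by the API server each time the client reads v1 Endpoints (most likely for its leader-election lock); they repeat every couple of seconds but are informational only. The replacement resource can be listed with:

  kubectl -n kube-system get endpointslices.discovery.k8s.io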
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051783 -n addons-051783
helpers_test.go:269: (dbg) Run:  kubectl --context addons-051783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3: exit status 1 (101.95071ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:37:00 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrnn8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wrnn8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  7m30s                  default-scheduler  Successfully assigned default/nginx to addons-051783
	  Warning  Failed     6m15s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m23s (x4 over 7m30s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     46s (x4 over 6m15s)    kubelet            Error: ErrImagePull
	  Warning  Failed     46s (x3 over 5m12s)    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5s (x7 over 6m15s)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5s (x7 over 6m15s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:38:27 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z2l94 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-z2l94:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-051783
	  Normal   Pulling    2m5s (x3 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     15s (x3 over 4m41s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     15s (x3 over 4m41s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x3 over 4m41s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     1s (x3 over 4m41s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zdgkp (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-zdgkp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rbxvf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-scvfj" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3: exit status 1
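The describe output above ties the CSI failure back to the same Docker Hub rate limiting: both nginx and task-pv-pod sit in ImagePullBackOff, and the NotFound errors in stderr are for pods that had already been deleted by the time the post-mortem describe ran. When triaging a run like this, the image-pull events can be filtered directly (a sketch using standard kubectl field selectors):

  kubectl --context addons-051783 get events -n default \
    --field-selector reason=Failed,involvedObject.name=nginx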
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable csi-hostpath-driver --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/CSI (384.09s)

                                                
                                    
TestAddons/parallel/LocalPath (345.36s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-051783 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-051783 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-051783 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
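The repeated query above is the test's wait loop seen from the outside. As a rough sketch of the same polling in Go (not the actual helpers_test.go code; the 5-minute timeout and 2-second interval are assumptions, while the profile, PVC name and namespace are taken from the log):

// Sketch only: poll the PVC phase with kubectl, as in the log above, until it
// reports "Bound" or the context deadline expires. Assumes kubectl is on PATH.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(ctx context.Context, kubeContext, pvc, ns string) error {
	tick := time.NewTicker(2 * time.Second) // assumed poll interval
	defer tick.Stop()
	for {
		// Same query the test issues: read only the .status.phase field.
		out, _ := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
			"get", "pvc", pvc, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("failed waiting for PVC %s: %w", pvc, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute) // assumed timeout
	defer cancel()
	if err := waitForPVCBound(ctx, "addons-051783", "test-pvc", "default"); err != nil {
		fmt.Println(err)
	}
}

Because exec.CommandContext ties every kubectl invocation to the same context, the loop ends with exactly the kind of context-deadline error reported above when the PVC never binds.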
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-051783
helpers_test.go:243: (dbg) docker inspect addons-051783:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	        "Created": "2025-09-29T08:29:49.784096917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 388185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T08:29:49.817498779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hosts",
	        "LogPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24-json.log",
	        "Name": "/addons-051783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-051783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-051783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	                "LowerDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-051783",
	                "Source": "/var/lib/docker/volumes/addons-051783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-051783",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-051783",
	                "name.minikube.sigs.k8s.io": "addons-051783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "047419f5f1ab31c122f731e4981df640cdefbc71a38b2a98a0269c254b8b5147",
	            "SandboxKey": "/var/run/docker/netns/047419f5f1ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-051783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:6e:72:c6:39:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0a6b532c24ef61399a92b99bcc9c2c11ccb6f875b789fadd5474d59e3dfaa8b",
	                    "EndpointID": "1838c1e0213d9bfb41a2e140fea05dd9b5a4866fea7930ce517a2c020e4c5b9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-051783",
	                        "d5025459b831"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
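Rather than scanning the full inspect dump for a single value, the same field can be read with an inspect format template. A minimal Go sketch, assuming the docker CLI is on PATH and reusing the container and network names from the output above:

// Sketch, not minikube code: extract the node IP from `docker inspect` with a
// Go template instead of dumping the whole JSON shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIP(container, network string) (string, error) {
	// Index NetworkSettings.Networks by network name and print its IPAddress.
	tmpl := fmt.Sprintf("{{(index .NetworkSettings.Networks %q).IPAddress}}", network)
	out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := containerIP("addons-051783", "addons-051783")
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println(ip) // 192.168.49.2 according to the inspect output above
}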
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-051783 -n addons-051783
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 logs -n 25: (1.284246197s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p download-docker-084266 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-084266                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-867285 --alsologtostderr --binary-mirror http://127.0.0.1:34813 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-867285                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ start   │ -p addons-051783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ enable headlamp -p addons-051783 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                           │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ ip      │ addons-051783 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	│ addons  │ addons-051783 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	│ addons  │ addons-051783 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:39 UTC │ 29 Sep 25 08:41 UTC │
	│ addons  │ addons-051783 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:41 UTC │ 29 Sep 25 08:41 UTC │
	│ addons  │ addons-051783 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:43 UTC │ 29 Sep 25 08:43 UTC │
	│ addons  │ addons-051783 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:44 UTC │ 29 Sep 25 08:44 UTC │
	│ addons  │ addons-051783 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:44 UTC │ 29 Sep 25 08:44 UTC │
	│ addons  │ addons-051783 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:45 UTC │ 29 Sep 25 08:45 UTC │
	│ addons  │ addons-051783 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:45 UTC │ 29 Sep 25 08:45 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:29:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:29:26.048391  387539 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:29:26.048698  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048709  387539 out.go:374] Setting ErrFile to fd 2...
	I0929 08:29:26.048715  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048947  387539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:29:26.049570  387539 out.go:368] Setting JSON to false
	I0929 08:29:26.050522  387539 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7915,"bootTime":1759126651,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:29:26.050623  387539 start.go:140] virtualization: kvm guest
	I0929 08:29:26.052691  387539 out.go:179] * [addons-051783] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:29:26.053951  387539 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:29:26.053949  387539 notify.go:220] Checking for updates...
	I0929 08:29:26.056443  387539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:29:26.057666  387539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:29:26.058965  387539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:29:26.060266  387539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:29:26.061458  387539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:29:26.062925  387539 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:29:26.085693  387539 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:29:26.085842  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.138374  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.129030053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.138489  387539 docker.go:318] overlay module found
	I0929 08:29:26.140424  387539 out.go:179] * Using the docker driver based on user configuration
	I0929 08:29:26.141686  387539 start.go:304] selected driver: docker
	I0929 08:29:26.141705  387539 start.go:924] validating driver "docker" against <nil>
	I0929 08:29:26.141717  387539 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:29:26.142365  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.198070  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.188331621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
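The two `docker system info --format "{{json .}}"` runs above are how minikube validates the docker driver before provisioning anything. The same Go-template mechanism can be narrowed to individual fields when debugging a failed driver check; a minimal sketch, assuming a local Docker daemon (field names match the Info struct dumped above):

	# print only the storage driver, cgroup driver and CPU count
	docker info --format '{{.Driver}} / {{.CgroupDriver}} / NCPU={{.NCPU}}'
	# or dump a single nested section as JSON, as the log itself does later for SecurityOptions
	docker info --format '{{json .SecurityOptions}}'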
	I0929 08:29:26.198307  387539 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 08:29:26.198590  387539 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:29:26.200386  387539 out.go:179] * Using Docker driver with root privileges
	I0929 08:29:26.201498  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:26.201578  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:26.201592  387539 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 08:29:26.201692  387539 start.go:348] cluster config:
	{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}


	I0929 08:29:26.202985  387539 out.go:179] * Starting "addons-051783" primary control-plane node in "addons-051783" cluster
	I0929 08:29:26.204068  387539 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 08:29:26.205294  387539 out.go:179] * Pulling base image v0.0.48 ...
	I0929 08:29:26.206376  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:26.206412  387539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 08:29:26.206422  387539 cache.go:58] Caching tarball of preloaded images
	I0929 08:29:26.206482  387539 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 08:29:26.206520  387539 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 08:29:26.206532  387539 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 08:29:26.206899  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:26.206927  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json: {Name:mk2a286bc12b96a7a99203a2062747f0cef91a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:26.223250  387539 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 08:29:26.223398  387539 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 08:29:26.223419  387539 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 08:29:26.223423  387539 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 08:29:26.223433  387539 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 08:29:26.223443  387539 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 08:29:38.381567  387539 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 08:29:38.381612  387539 cache.go:232] Successfully downloaded all kic artifacts
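The kic base image is loaded from the on-disk tarball cache rather than pulled, which is what the ~12s gap above covers. Whether it actually landed in the local daemon can be checked afterwards; a small sketch (note that images loaded from a tarball rather than pulled from a registry may show an empty DIGEST column, so the image ID is the more reliable indicator):

	# list locally available kicbase images with tag, ID and digest columns
	docker image ls --digests gcr.io/k8s-minikube/kicbase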
	I0929 08:29:38.381692  387539 start.go:360] acquireMachinesLock for addons-051783: {Name:mk2e012788fca6778bd19d14926129f41648dfda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 08:29:38.381939  387539 start.go:364] duration metric: took 219.203µs to acquireMachinesLock for "addons-051783"
	I0929 08:29:38.381976  387539 start.go:93] Provisioning new machine with config: &{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:29:38.382063  387539 start.go:125] createHost starting for "" (driver="docker")
	I0929 08:29:38.383873  387539 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 08:29:38.384110  387539 start.go:159] libmachine.API.Create for "addons-051783" (driver="docker")
	I0929 08:29:38.384143  387539 client.go:168] LocalClient.Create starting
	I0929 08:29:38.384255  387539 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem
	I0929 08:29:38.717409  387539 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem
	I0929 08:29:39.058441  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 08:29:39.075697  387539 cli_runner.go:211] docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 08:29:39.075776  387539 network_create.go:284] running [docker network inspect addons-051783] to gather additional debugging logs...
	I0929 08:29:39.075797  387539 cli_runner.go:164] Run: docker network inspect addons-051783
	W0929 08:29:39.093367  387539 cli_runner.go:211] docker network inspect addons-051783 returned with exit code 1
	I0929 08:29:39.093407  387539 network_create.go:287] error running [docker network inspect addons-051783]: docker network inspect addons-051783: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-051783 not found
	I0929 08:29:39.093422  387539 network_create.go:289] output of [docker network inspect addons-051783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-051783 not found
	
	** /stderr **
	I0929 08:29:39.093524  387539 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:39.112614  387539 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c10860}
	I0929 08:29:39.112659  387539 network_create.go:124] attempt to create docker network addons-051783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 08:29:39.112709  387539 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-051783 addons-051783
	I0929 08:29:39.172396  387539 network_create.go:108] docker network addons-051783 192.168.49.0/24 created
	I0929 08:29:39.172433  387539 kic.go:121] calculated static IP "192.168.49.2" for the "addons-051783" container
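The inspect/create sequence above is the normal first-run path: the initial inspect fails because the network does not exist yet, then a dedicated bridge network is created and the static node IP is derived from it. The chosen subnet and gateway can be read back with the same kind of Go template the log uses; a sketch, assuming the addons-051783 network still exists:

	docker network inspect addons-051783 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 gw 192.168.49.1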
	I0929 08:29:39.172502  387539 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 08:29:39.190245  387539 cli_runner.go:164] Run: docker volume create addons-051783 --label name.minikube.sigs.k8s.io=addons-051783 --label created_by.minikube.sigs.k8s.io=true
	I0929 08:29:39.209341  387539 oci.go:103] Successfully created a docker volume addons-051783
	I0929 08:29:39.209430  387539 cli_runner.go:164] Run: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 08:29:45.546598  387539 cli_runner.go:217] Completed: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.337124509s)
	I0929 08:29:45.546633  387539 oci.go:107] Successfully prepared a docker volume addons-051783
	I0929 08:29:45.546654  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:45.546683  387539 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 08:29:45.546737  387539 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 08:29:49.714226  387539 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.167437965s)
	I0929 08:29:49.714268  387539 kic.go:203] duration metric: took 4.167582619s to extract preloaded images to volume ...
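The preload step above extracts the cri-o image tarball straight into the addons-051783 volume, so the node container starts with /var already populated. The contents of a named volume can be inspected without starting the node; a sketch, using any small throwaway image (alpine here is purely illustrative, not something the log uses):

	docker run --rm -v addons-051783:/var alpine ls /var/lib
	# directories created by the preload (e.g. cri-o image storage) should already be present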
	W0929 08:29:49.714368  387539 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 08:29:49.714404  387539 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 08:29:49.714455  387539 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 08:29:49.767111  387539 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-051783 --name addons-051783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-051783 --network addons-051783 --ip 192.168.49.2 --volume addons-051783:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 08:29:50.031579  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Running}}
	I0929 08:29:50.049810  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.068448  387539 cli_runner.go:164] Run: docker exec addons-051783 stat /var/lib/dpkg/alternatives/iptables
	I0929 08:29:50.119527  387539 oci.go:144] the created container "addons-051783" has a running status.
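At this point the node container is up, with the ports minikube needs (22, 2376, 5000, 8443, 32443) published on loopback under host-assigned port numbers. Two quick checks that mirror the inspect calls in the log; a sketch, assuming the container is still running:

	docker container inspect addons-051783 --format '{{.State.Status}}'   # expect: running
	docker port addons-051783                                             # shows the 127.0.0.1:<port> mappings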
	I0929 08:29:50.119561  387539 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa...
	I0929 08:29:50.320586  387539 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 08:29:50.349341  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.370499  387539 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 08:29:50.370528  387539 kic_runner.go:114] Args: [docker exec --privileged addons-051783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 08:29:50.419544  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.438350  387539 machine.go:93] provisionDockerMachine start ...
	I0929 08:29:50.438444  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.459048  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.459374  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.459393  387539 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 08:29:50.596058  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.596100  387539 ubuntu.go:182] provisioning hostname "addons-051783"
	I0929 08:29:50.596175  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.615278  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.615589  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.615612  387539 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-051783 && echo "addons-051783" | sudo tee /etc/hostname
	I0929 08:29:50.766108  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.766195  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.785560  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.785774  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.785791  387539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-051783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-051783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-051783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 08:29:50.924619  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
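Provisioning talks to the node over SSH on the host port mapped to 22/tcp (33139 in this run) using the key generated a few lines above. The same session can be reproduced by hand when debugging; a sketch, keeping in mind that the port is assigned per run and the key path is taken from this log:

	ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa \
	    -p 33139 docker@127.0.0.1 hostname
	# expect: addons-051783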
	I0929 08:29:50.924652  387539 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 08:29:50.924674  387539 ubuntu.go:190] setting up certificates
	I0929 08:29:50.924687  387539 provision.go:84] configureAuth start
	I0929 08:29:50.924737  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:50.943329  387539 provision.go:143] copyHostCerts
	I0929 08:29:50.943421  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 08:29:50.943556  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 08:29:50.943643  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 08:29:50.943713  387539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.addons-051783 san=[127.0.0.1 192.168.49.2 addons-051783 localhost minikube]
	I0929 08:29:51.148195  387539 provision.go:177] copyRemoteCerts
	I0929 08:29:51.148260  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 08:29:51.148304  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.166345  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.264074  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 08:29:51.290856  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 08:29:51.316758  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 08:29:51.341889  387539 provision.go:87] duration metric: took 417.187234ms to configureAuth
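configureAuth generates a server certificate whose SANs cover every name a client might use for the node (127.0.0.1, 192.168.49.2, addons-051783, localhost, minikube) and copies it to /etc/docker on the node. If TLS to the node ever fails, the SAN list is the first thing to verify; a sketch using the host-side copy written above:

	openssl x509 -in /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'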
	I0929 08:29:51.341922  387539 ubuntu.go:206] setting minikube options for container-runtime
	I0929 08:29:51.342090  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:29:51.342194  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.359952  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:51.360170  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:51.360189  387539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 08:29:51.599614  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 08:29:51.599641  387539 machine.go:96] duration metric: took 1.161262882s to provisionDockerMachine
	I0929 08:29:51.599653  387539 client.go:171] duration metric: took 13.215501429s to LocalClient.Create
	I0929 08:29:51.599668  387539 start.go:167] duration metric: took 13.215557799s to libmachine.API.Create "addons-051783"
	I0929 08:29:51.599677  387539 start.go:293] postStartSetup for "addons-051783" (driver="docker")
	I0929 08:29:51.599688  387539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 08:29:51.599774  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 08:29:51.599856  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.618351  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.717587  387539 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 08:29:51.721317  387539 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 08:29:51.721352  387539 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 08:29:51.721363  387539 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 08:29:51.721372  387539 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 08:29:51.721390  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 08:29:51.721462  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 08:29:51.721495  387539 start.go:296] duration metric: took 121.8109ms for postStartSetup
	I0929 08:29:51.721801  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.739650  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:51.740046  387539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 08:29:51.740104  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.758050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.851192  387539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 08:29:51.855723  387539 start.go:128] duration metric: took 13.4736408s to createHost
	I0929 08:29:51.855753  387539 start.go:83] releasing machines lock for "addons-051783", held for 13.47379323s
	I0929 08:29:51.855844  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.873999  387539 ssh_runner.go:195] Run: cat /version.json
	I0929 08:29:51.874046  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.874101  387539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 08:29:51.874186  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.892677  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.892826  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.984022  387539 ssh_runner.go:195] Run: systemctl --version
	I0929 08:29:52.057018  387539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 08:29:52.197504  387539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 08:29:52.202664  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.226004  387539 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 08:29:52.226089  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.256267  387539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 08:29:52.256294  387539 start.go:495] detecting cgroup driver to use...
	I0929 08:29:52.256336  387539 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 08:29:52.256387  387539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 08:29:52.272062  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 08:29:52.284075  387539 docker.go:218] disabling cri-docker service (if available) ...
	I0929 08:29:52.284139  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 08:29:52.297608  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 08:29:52.311496  387539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 08:29:52.379434  387539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 08:29:52.452878  387539 docker.go:234] disabling docker service ...
	I0929 08:29:52.452951  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 08:29:52.471190  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 08:29:52.482728  387539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 08:29:52.553081  387539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 08:29:52.660824  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 08:29:52.672658  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 08:29:52.689950  387539 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21650-382648/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I0929 08:29:53.606681  387539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 08:29:53.606744  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.620746  387539 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 08:29:53.620827  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.632032  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.642692  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.653396  387539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 08:29:53.663250  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.673800  387539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.690677  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.701296  387539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 08:29:53.710748  387539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 08:29:53.720068  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:53.822567  387539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 08:29:54.052148  387539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 08:29:54.052242  387539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 08:29:54.056279  387539 start.go:563] Will wait 60s for crictl version
	I0929 08:29:54.056335  387539 ssh_runner.go:195] Run: which crictl
	I0929 08:29:54.059686  387539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 08:29:54.093633  387539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 08:29:54.093726  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.130572  387539 ssh_runner.go:195] Run: crio --version
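All of the sed edits above land in /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned, cgroup management is switched to systemd, conmon is moved into the pod cgroup, and unprivileged low ports are allowed. Because the driver is docker, the result can be checked without an SSH session; a sketch, assuming the node container name from this run:

	docker exec addons-051783 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf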
	I0929 08:29:54.167704  387539 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	I0929 08:29:54.169060  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:54.186559  387539 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 08:29:54.190730  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 08:29:54.202692  387539 kubeadm.go:875] updating cluster {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 08:29:54.202909  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.337502  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.468366  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.649435  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:54.649610  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.777589  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.915339  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:55.048055  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.117941  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.117965  387539 crio.go:433] Images already preloaded, skipping extraction
	I0929 08:29:55.118025  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.154367  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.154391  387539 cache_images.go:85] Images are preloaded, skipping loading
	I0929 08:29:55.154401  387539 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I0929 08:29:55.154505  387539 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-051783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
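The kubelet unit drop-in shown above is copied a little later in this log to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 363-byte scp below). Once the node is up, the effective drop-in can be read back directly; a sketch via the node container (systemctl cat kubelet inside the node would show the merged unit as well):

	docker exec addons-051783 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf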
	I0929 08:29:55.154591  387539 ssh_runner.go:195] Run: crio config
	I0929 08:29:55.197157  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:55.197179  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:55.197193  387539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 08:29:55.197222  387539 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-051783 NodeName:addons-051783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 08:29:55.197413  387539 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-051783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 08:29:55.197493  387539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 08:29:55.207525  387539 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 08:29:55.207613  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 08:29:55.217221  387539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 08:29:55.235810  387539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 08:29:55.258594  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
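The generated kubeadm document printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what just landed on the node as /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file before init is attempted; a sketch using the bundled binary, assuming `kubeadm config validate` is available in this version:

	docker exec addons-051783 /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new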
	I0929 08:29:55.277991  387539 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 08:29:55.281790  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 08:29:55.293204  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:55.360353  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:29:55.382375  387539 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783 for IP: 192.168.49.2
	I0929 08:29:55.382400  387539 certs.go:194] generating shared ca certs ...
	I0929 08:29:55.382416  387539 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.382548  387539 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 08:29:55.651560  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt ...
	I0929 08:29:55.651593  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt: {Name:mk53fbf30de594b3575593db0eac7c74aa2a569b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651775  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key ...
	I0929 08:29:55.651787  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key: {Name:mk35c377f1d90bf347db7dc4624ea5b41f2dcae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651874  387539 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 08:29:56.010531  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt ...
	I0929 08:29:56.010572  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt: {Name:mkabe28787fe5521225369fcdd8a8684c242d367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010810  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key ...
	I0929 08:29:56.010828  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key: {Name:mk151240dae8e83bb981e456caae01db62eb2077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010954  387539 certs.go:256] generating profile certs ...
	I0929 08:29:56.011050  387539 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key
	I0929 08:29:56.011071  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt with IP's: []
	I0929 08:29:56.156766  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt ...
	I0929 08:29:56.156798  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: {Name:mk9b8f8dd7c08d896eb2f2a24df27c4df7b8a87a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157020  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key ...
	I0929 08:29:56.157045  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key: {Name:mk413d2883ee03859619bae9a6ad426c2dac294b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157158  387539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d
	I0929 08:29:56.157188  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 08:29:56.672467  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d ...
	I0929 08:29:56.672506  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d: {Name:mka498a3f60495ba4009bb038cca767d64e6d878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672723  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d ...
	I0929 08:29:56.672747  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d: {Name:mkd42036f907b80afa6962c66b97c00a14ed475b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672879  387539 certs.go:381] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt
	I0929 08:29:56.672993  387539 certs.go:385] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key
	I0929 08:29:56.673074  387539 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key
	I0929 08:29:56.673103  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt with IP's: []
	I0929 08:29:57.054367  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt ...
	I0929 08:29:57.054403  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt: {Name:mk108739363f385844a88df9ec106753ae771d0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054593  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key ...
	I0929 08:29:57.054605  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key: {Name:mk26b223288f2fd31a6e78b544277cdc3d5192ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054865  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 08:29:57.054909  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 08:29:57.054936  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 08:29:57.054959  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
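	(Illustrative aside, not part of the log: the apiserver cert generated above is signed for the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2. The following is a rough Go sketch of issuing such a CA-signed serving certificate with crypto/x509; the package and function names are hypothetical and this is not minikube's actual crypto.go code, which also handles locking, key reuse and file modes.)

	package certsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServingCert issues a CA-signed serving certificate covering the same
	// IP SANs the log lists for apiserver.crt (service IP, loopback, node IP).
	func newServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}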
	I0929 08:29:57.055530  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 08:29:57.081419  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 08:29:57.107158  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 08:29:57.132325  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 08:29:57.157699  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 08:29:57.182851  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 08:29:57.207862  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 08:29:57.233471  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 08:29:57.258657  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 08:29:57.286501  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 08:29:57.305136  387539 ssh_runner.go:195] Run: openssl version
	I0929 08:29:57.310898  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 08:29:57.323725  387539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327458  387539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327527  387539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.334303  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
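	(Illustrative aside: the two commands above hash the CA with openssl and symlink it into /etc/ssl/certs under the subject hash, e.g. b5213941.0, so system TLS trusts minikubeCA. A minimal Go sketch of the same two steps follows; it is an assumption-laden reconstruction, not minikube's implementation.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCA mirrors the commands above: derive the OpenSSL subject hash of
	// the CA, then create the /etc/ssl/certs/<hash>.0 symlink only if missing.
	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		guard := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)
		return exec.Command("sudo", "/bin/bash", "-c", guard).Run()
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("CA install failed:", err)
		}
	}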
	I0929 08:29:57.344385  387539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 08:29:57.347990  387539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 08:29:57.348046  387539 kubeadm.go:392] StartCluster: {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:29:57.348116  387539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 08:29:57.348159  387539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 08:29:57.385638  387539 cri.go:89] found id: ""
	I0929 08:29:57.385716  387539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 08:29:57.395454  387539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 08:29:57.405038  387539 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 08:29:57.405100  387539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 08:29:57.414685  387539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 08:29:57.414705  387539 kubeadm.go:157] found existing configuration files:
	
	I0929 08:29:57.414765  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 08:29:57.424091  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 08:29:57.424158  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 08:29:57.433341  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 08:29:57.442616  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 08:29:57.442679  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 08:29:57.451665  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.460943  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 08:29:57.461008  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.470122  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 08:29:57.479257  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 08:29:57.479340  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
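	(Illustrative aside: the grep/rm pairs above are a stale-config sweep before kubeadm init: each /etc/kubernetes/*.conf file is kept only if it already references the expected control-plane endpoint. A short Go sketch of that loop, under the assumption of shelling out exactly as the log does; the function name is hypothetical.)

	package sketch

	import "os/exec"

	// cleanStaleKubeconfigs deletes any kubeconfig that does not point at the
	// expected control-plane endpoint so kubeadm init can regenerate it.
	func cleanStaleKubeconfigs() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + name
			// grep exits non-zero when the endpoint (or the file itself) is missing.
			if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
				_ = exec.Command("sudo", "rm", "-f", path).Run()
			}
		}
	}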
	I0929 08:29:57.488496  387539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 08:29:57.543664  387539 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 08:29:57.607707  387539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 08:30:06.732943  387539 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I0929 08:30:06.732999  387539 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 08:30:06.733103  387539 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 08:30:06.733192  387539 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 08:30:06.733241  387539 kubeadm.go:310] OS: Linux
	I0929 08:30:06.733332  387539 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 08:30:06.733405  387539 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 08:30:06.733457  387539 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 08:30:06.733497  387539 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 08:30:06.733545  387539 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 08:30:06.733624  387539 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 08:30:06.733688  387539 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 08:30:06.733751  387539 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 08:30:06.733912  387539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 08:30:06.734049  387539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 08:30:06.734125  387539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 08:30:06.734176  387539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 08:30:06.736008  387539 out.go:252]   - Generating certificates and keys ...
	I0929 08:30:06.736074  387539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 08:30:06.736130  387539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 08:30:06.736184  387539 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 08:30:06.736237  387539 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 08:30:06.736289  387539 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 08:30:06.736356  387539 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 08:30:06.736446  387539 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 08:30:06.736584  387539 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736671  387539 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 08:30:06.736803  387539 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736949  387539 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 08:30:06.737047  387539 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 08:30:06.737115  387539 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 08:30:06.737192  387539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 08:30:06.737274  387539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 08:30:06.737358  387539 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 08:30:06.737431  387539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 08:30:06.737517  387539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 08:30:06.737617  387539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 08:30:06.737730  387539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 08:30:06.737805  387539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 08:30:06.739945  387539 out.go:252]   - Booting up control plane ...
	I0929 08:30:06.740037  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 08:30:06.740106  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 08:30:06.740177  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 08:30:06.740270  387539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 08:30:06.740362  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 08:30:06.740460  387539 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 08:30:06.740572  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 08:30:06.740634  387539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 08:30:06.740771  387539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 08:30:06.740901  387539 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 08:30:06.740969  387539 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.961891ms
	I0929 08:30:06.741050  387539 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 08:30:06.741148  387539 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 08:30:06.741256  387539 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 08:30:06.741361  387539 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 08:30:06.741468  387539 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.198584202s
	I0929 08:30:06.741557  387539 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.20667671s
	I0929 08:30:06.741647  387539 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002286434s
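	(Illustrative aside: the control-plane-check lines above poll HTTPS health endpoints such as https://127.0.0.1:10259/livez until they return 200. A rough Go sketch of such a probe loop follows; it assumes a plain GET with TLS verification skipped, and is not kubeadm's actual checker.)

	package sketch

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls an HTTPS health endpoint until it answers 200 OK or
	// the timeout expires, sleeping briefly between attempts.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}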
	I0929 08:30:06.741774  387539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 08:30:06.741941  387539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 08:30:06.741998  387539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 08:30:06.742173  387539 kubeadm.go:310] [mark-control-plane] Marking the node addons-051783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 08:30:06.742236  387539 kubeadm.go:310] [bootstrap-token] Using token: sez7z1.jh96okhowb57z8tt
	I0929 08:30:06.743877  387539 out.go:252]   - Configuring RBAC rules ...
	I0929 08:30:06.743987  387539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 08:30:06.744079  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 08:30:06.744207  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 08:30:06.744316  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 08:30:06.744423  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 08:30:06.744505  387539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 08:30:06.744607  387539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 08:30:06.744646  387539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 08:30:06.744689  387539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 08:30:06.744695  387539 kubeadm.go:310] 
	I0929 08:30:06.744746  387539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 08:30:06.744752  387539 kubeadm.go:310] 
	I0929 08:30:06.744820  387539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 08:30:06.744826  387539 kubeadm.go:310] 
	I0929 08:30:06.744869  387539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 08:30:06.744924  387539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 08:30:06.744972  387539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 08:30:06.744978  387539 kubeadm.go:310] 
	I0929 08:30:06.745052  387539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 08:30:06.745066  387539 kubeadm.go:310] 
	I0929 08:30:06.745135  387539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 08:30:06.745149  387539 kubeadm.go:310] 
	I0929 08:30:06.745232  387539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 08:30:06.745306  387539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 08:30:06.745369  387539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 08:30:06.745377  387539 kubeadm.go:310] 
	I0929 08:30:06.745445  387539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 08:30:06.745514  387539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 08:30:06.745520  387539 kubeadm.go:310] 
	I0929 08:30:06.745584  387539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745665  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 \
	I0929 08:30:06.745690  387539 kubeadm.go:310] 	--control-plane 
	I0929 08:30:06.745699  387539 kubeadm.go:310] 
	I0929 08:30:06.745764  387539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 08:30:06.745774  387539 kubeadm.go:310] 
	I0929 08:30:06.745853  387539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745968  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 
	I0929 08:30:06.745984  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:30:06.745992  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:30:06.748010  387539 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 08:30:06.749332  387539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 08:30:06.753814  387539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I0929 08:30:06.753848  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 08:30:06.772879  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 08:30:06.985959  387539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 08:30:06.986041  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:06.986104  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-051783 minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78 minikube.k8s.io/name=addons-051783 minikube.k8s.io/primary=true
	I0929 08:30:06.996442  387539 ops.go:34] apiserver oom_adj: -16
	I0929 08:30:07.062951  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:07.563693  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.063933  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.563857  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.063020  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.563145  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.063764  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.564058  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.063584  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.131479  387539 kubeadm.go:1105] duration metric: took 4.145485124s to wait for elevateKubeSystemPrivileges
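	(Illustrative aside: the repeated "kubectl get sa default" runs above are a roughly 500ms poll loop that waits until the default service account exists, i.e. until kube-system privileges are usable. A minimal Go sketch of that wait, with a hypothetical helper name, shelling out the same way the log does.)

	package sketch

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries `kubectl get sa default` until it succeeds or
	// the timeout is reached.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
			if cmd.Run() == nil {
				return nil // the default service account exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready within %s", timeout)
	}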
	I0929 08:30:11.131516  387539 kubeadm.go:394] duration metric: took 13.783475405s to StartCluster
	I0929 08:30:11.131536  387539 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.131680  387539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:30:11.132107  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.132380  387539 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:30:11.132425  387539 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
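	(Illustrative aside: the toEnable map above drives the addon setup, and the interleaved "Setting addon ..." / "Checking if ... exists" lines that follow come from each addon being prepared independently. A hedged Go sketch of fanning that map out to goroutines and surfacing the first error; this is an assumption about the shape of the logic, not minikube's actual addons.go.)

	package sketch

	import (
		"fmt"
		"sync"
	)

	// enableAddons enables every addon marked true concurrently and returns
	// the first failure, if any.
	func enableAddons(toEnable map[string]bool, enable func(name string) error) error {
		var wg sync.WaitGroup
		errs := make(chan error, len(toEnable))
		for name, want := range toEnable {
			if !want {
				continue
			}
			wg.Add(1)
			go func(n string) {
				defer wg.Done()
				if err := enable(n); err != nil {
					errs <- fmt.Errorf("enable %s: %w", n, err)
				}
			}(name)
		}
		wg.Wait()
		close(errs)
		for err := range errs {
			return err
		}
		return nil
	}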
	I0929 08:30:11.132561  387539 addons.go:69] Setting yakd=true in profile "addons-051783"
	I0929 08:30:11.132586  387539 addons.go:238] Setting addon yakd=true in "addons-051783"
	I0929 08:30:11.132592  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.132625  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132389  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 08:30:11.132650  387539 addons.go:69] Setting default-storageclass=true in profile "addons-051783"
	I0929 08:30:11.132650  387539 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-051783"
	I0929 08:30:11.132651  387539 addons.go:69] Setting registry-creds=true in profile "addons-051783"
	I0929 08:30:11.132672  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-051783"
	I0929 08:30:11.132675  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-051783"
	I0929 08:30:11.132684  387539 addons.go:238] Setting addon registry-creds=true in "addons-051783"
	I0929 08:30:11.132675  387539 addons.go:69] Setting storage-provisioner=true in profile "addons-051783"
	I0929 08:30:11.132723  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132729  387539 addons.go:69] Setting gcp-auth=true in profile "addons-051783"
	I0929 08:30:11.132737  387539 addons.go:69] Setting ingress=true in profile "addons-051783"
	I0929 08:30:11.132749  387539 addons.go:238] Setting addon ingress=true in "addons-051783"
	I0929 08:30:11.132751  387539 mustload.go:65] Loading cluster: addons-051783
	I0929 08:30:11.132786  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132903  387539 addons.go:69] Setting ingress-dns=true in profile "addons-051783"
	I0929 08:30:11.132921  387539 addons.go:238] Setting addon ingress-dns=true in "addons-051783"
	I0929 08:30:11.132932  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.133022  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133038  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133039  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133154  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133198  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133236  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133242  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133465  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.134910  387539 addons.go:69] Setting metrics-server=true in profile "addons-051783"
	I0929 08:30:11.134935  387539 addons.go:238] Setting addon metrics-server=true in "addons-051783"
	I0929 08:30:11.134966  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.135401  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133500  387539 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.136449  387539 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-051783"
	I0929 08:30:11.136484  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.136993  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.137446  387539 addons.go:69] Setting registry=true in profile "addons-051783"
	I0929 08:30:11.137472  387539 addons.go:238] Setting addon registry=true in "addons-051783"
	I0929 08:30:11.137504  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.137785  387539 out.go:179] * Verifying Kubernetes components...
	I0929 08:30:11.132620  387539 addons.go:69] Setting inspektor-gadget=true in profile "addons-051783"
	I0929 08:30:11.137998  387539 addons.go:238] Setting addon inspektor-gadget=true in "addons-051783"
	I0929 08:30:11.138030  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.138040  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.138478  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.132724  387539 addons.go:238] Setting addon storage-provisioner=true in "addons-051783"
	I0929 08:30:11.138872  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133573  387539 addons.go:69] Setting volcano=true in profile "addons-051783"
	I0929 08:30:11.133608  387539 addons.go:69] Setting volumesnapshots=true in profile "addons-051783"
	I0929 08:30:11.133632  387539 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.133523  387539 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-051783"
	I0929 08:30:11.133512  387539 addons.go:69] Setting cloud-spanner=true in profile "addons-051783"
	I0929 08:30:11.139071  387539 addons.go:238] Setting addon cloud-spanner=true in "addons-051783"
	I0929 08:30:11.139164  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139273  387539 addons.go:238] Setting addon volumesnapshots=true in "addons-051783"
	I0929 08:30:11.139284  387539 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-051783"
	I0929 08:30:11.139311  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139319  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140056  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:30:11.140193  387539 addons.go:238] Setting addon volcano=true in "addons-051783"
	I0929 08:30:11.140204  387539 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:11.140225  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140228  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.146698  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147224  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147394  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149077  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149662  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.151164  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.176264  387539 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 08:30:11.181229  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 08:30:11.181264  387539 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 08:30:11.181355  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.198928  387539 addons.go:238] Setting addon default-storageclass=true in "addons-051783"
	I0929 08:30:11.198980  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.200501  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.202621  387539 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 08:30:11.202751  387539 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 08:30:11.204060  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 08:30:11.204203  387539 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 08:30:11.204287  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.204590  387539 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 08:30:11.206350  387539 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 08:30:11.206413  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 08:30:11.206494  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	W0929 08:30:11.215084  387539 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 08:30:11.220539  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.228994  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 08:30:11.229058  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.230311  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 08:30:11.230348  387539 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 08:30:11.230415  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.230456  387539 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 08:30:11.232483  387539 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-051783"
	I0929 08:30:11.232653  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.234514  387539 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.234537  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 08:30:11.234593  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.236276  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.238980  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 08:30:11.240948  387539 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 08:30:11.242224  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.242345  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.242360  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 08:30:11.242423  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.249763  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.249815  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 08:30:11.249988  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.251632  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 08:30:11.252713  387539 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 08:30:11.256731  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 08:30:11.256909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.256925  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 08:30:11.257007  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.259232  387539 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 08:30:11.259246  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 08:30:11.261351  387539 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.261383  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 08:30:11.261446  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.261602  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 08:30:11.261990  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.264208  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 08:30:11.265661  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 08:30:11.266953  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 08:30:11.268988  387539 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 08:30:11.269090  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 08:30:11.270103  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.270359  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 08:30:11.270376  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 08:30:11.270435  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.270601  387539 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 08:30:11.270610  387539 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 08:30:11.270648  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.275993  387539 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 08:30:11.282092  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.282115  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 08:30:11.282181  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.285473  387539 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 08:30:11.290090  387539 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.291158  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 08:30:11.295912  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
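	(Illustrative aside: the sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway, inserting a hosts{} block just before the "forward . /etc/resolv.conf" directive and then doing a kubectl replace. A simplified Go sketch of that string transformation; the helper name is hypothetical and omits the kubectl round-trip.)

	package sketch

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} block resolving host.minikube.internal
	// to hostIP immediately before CoreDNS's forward directive.
	func injectHostRecord(corefile, hostIP string) string {
		block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				out = append(out, block)
			}
			out = append(out, line)
		}
		return strings.Join(out, "\n")
	}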
	I0929 08:30:11.295961  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.299675  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.313891  387539 out.go:179]   - Using image docker.io/busybox:stable
	I0929 08:30:11.315473  387539 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 08:30:11.316814  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.316848  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 08:30:11.316910  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.317050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.323553  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.332930  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.335659  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.338799  387539 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:11.338893  387539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 08:30:11.338992  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.348819  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.349921  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:30:11.354726  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.358638  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.365096  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.375197  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.379217  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	W0929 08:30:11.383998  387539 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 08:30:11.384044  387539 retry.go:31] will retry after 372.305387ms: ssh: handshake failed: EOF
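	(Illustrative aside: "will retry after 372.305387ms" above is the generic retry-with-randomized-delay pattern used for transient failures such as the SSH handshake EOF. A hedged Go sketch of that pattern; minikube's retry.go may differ in backoff policy.)

	package sketch

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// withRetry runs fn up to attempts times, sleeping a randomized delay
	// (between base and 2*base) after each failure.
	func withRetry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base))) // base must be > 0
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}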
	I0929 08:30:11.384985  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.385740  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.455618  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 08:30:11.455652  387539 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 08:30:11.483956  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 08:30:11.483993  387539 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 08:30:11.501077  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 08:30:11.501104  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 08:30:11.512909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 08:30:11.512936  387539 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 08:30:11.513909  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 08:30:11.513933  387539 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 08:30:11.522184  387539 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.522210  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 08:30:11.532474  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.547827  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.549888  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 08:30:11.549921  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 08:30:11.551406  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 08:30:11.551429  387539 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 08:30:11.551604  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.551620  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 08:30:11.562054  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.567658  387539 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.567682  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 08:30:11.568342  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.575483  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.579024  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.580084  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.589345  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 08:30:11.589374  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 08:30:11.591142  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.596651  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.617511  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.639242  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 08:30:11.639268  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 08:30:11.640436  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.640457  387539 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 08:30:11.676132  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.683757  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 08:30:11.683933  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 08:30:11.694476  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.733321  387539 node_ready.go:35] waiting up to 6m0s for node "addons-051783" to be "Ready" ...
	I0929 08:30:11.737381  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 08:30:11.737409  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 08:30:11.739451  387539 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 08:30:11.742034  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 08:30:11.742058  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 08:30:11.860616  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 08:30:11.860647  387539 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 08:30:11.867313  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 08:30:11.867348  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 08:30:11.967456  387539 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:11.967489  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 08:30:11.972315  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 08:30:11.972363  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 08:30:12.022878  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:12.038007  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 08:30:12.038036  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 08:30:12.049218  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:12.116439  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 08:30:12.116470  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 08:30:12.218447  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 08:30:12.218482  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 08:30:12.270160  387539 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-051783" context rescaled to 1 replicas
	I0929 08:30:12.276753  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 08:30:12.276954  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 08:30:12.325380  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.325408  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 08:30:12.363377  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.640545  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.07217093s)
	W0929 08:30:12.640603  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:12.640631  387539 retry.go:31] will retry after 237.04452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
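
The repeated ig-crd.yaml failures above all trace back to the same kubectl validation error: at least one document in that manifest is missing its apiVersion and kind fields, so retrying the apply cannot succeed until the file itself is corrected. As a rough, stand-alone illustration (not minikube code), a pre-flight scan along the following lines would flag the broken document before kubectl apply is ever invoked; the manifest path is taken from the log, everything else here is hypothetical.

// preflight.go - illustrative sketch only, not part of minikube.
// Scans a multi-document manifest and reports documents missing the
// apiVersion or kind fields that kubectl validation requires.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml") // path from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			log.Fatalf("document %d: %v", i, err)
		}
		if doc == nil {
			continue // empty document, e.g. a trailing "---"
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d is missing apiVersion and/or kind\n", i)
		}
	}
}
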
	I0929 08:30:12.640719  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.065212731s)
	I0929 08:30:12.641043  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.061988054s)
	I0929 08:30:12.641104  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.060998244s)
	I0929 08:30:12.641174  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049961126s)
	I0929 08:30:12.837190  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.240492795s)
	I0929 08:30:12.837239  387539 addons.go:479] Verifying addon ingress=true in "addons-051783"
	I0929 08:30:12.837345  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.219781667s)
	I0929 08:30:12.837419  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161075095s)
	I0929 08:30:12.837447  387539 addons.go:479] Verifying addon registry=true in "addons-051783"
	I0929 08:30:12.837566  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.142937066s)
	I0929 08:30:12.837594  387539 addons.go:479] Verifying addon metrics-server=true in "addons-051783"
	I0929 08:30:12.839983  387539 out.go:179] * Verifying ingress addon...
	I0929 08:30:12.839983  387539 out.go:179] * Verifying registry addon...
	I0929 08:30:12.839983  387539 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-051783 service yakd-dashboard -n yakd-dashboard
	
	I0929 08:30:12.842161  387539 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 08:30:12.843164  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 08:30:12.846165  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:12.846189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:12.846718  387539 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 08:30:12.846741  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:12.878020  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.347067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.347316  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.444185  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.394912895s)
	W0929 08:30:13.444269  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 08:30:13.444303  387539 retry.go:31] will retry after 148.150087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
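
The VolumeSnapshotClass failure above is an ordering race rather than a broken manifest: the snapshot CRDs in the same batch were created but not yet established when kubectl tried to map the csi-hostpath-snapclass object, hence "ensure CRDs are installed first"; the forced re-apply a few lines later goes through once the API server has registered them. Below is a sketch, under stated assumptions, of the kind of wait that avoids this race using client-go; the kubeconfig path and timeout are assumed, while the CRD name comes from the log.

// crdwait.go - illustrative sketch, not minikube's implementation.
// Polls until a CRD reports the Established condition so that custom
// resources of that kind can be applied without a mapping error.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the log shows /var/lib/minikube/kubeconfig on the node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	const crdName = "volumesnapshotclasses.snapshot.storage.k8s.io" // CRD created in the log above
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) // assumed timeout
	defer cancel()

	for {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, crdName, metav1.GetOptions{})
		if err == nil {
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
					fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			log.Fatalf("timed out waiting for %s: %v", crdName, ctx.Err())
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}
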
	I0929 08:30:13.444442  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.080991087s)
	I0929 08:30:13.444483  387539 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:13.446118  387539 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 08:30:13.448654  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 08:30:13.452016  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:13.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:13.577429  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.577457  387539 retry.go:31] will retry after 254.552952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.593694  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0929 08:30:13.737433  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:13.832408  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.846313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.846455  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.952328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.346125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.346258  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.845799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.845811  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.951680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.346030  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.346221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.453724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.845371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.845746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.952128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.053703  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.459968372s)
	I0929 08:30:16.053810  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.22138062s)
	W0929 08:30:16.053859  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:16.053883  387539 retry.go:31] will retry after 481.367348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:16.235952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:16.346141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.346415  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.452678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.535851  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:16.846177  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.846299  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.951988  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:17.090051  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.090084  387539 retry.go:31] will retry after 480.173629ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.345653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.345864  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.453018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:17.571186  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:17.846646  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.846705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.952363  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:18.133672  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:18.133711  387539 retry.go:31] will retry after 1.605452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:18.236698  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:18.345996  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.346227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.831696  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 08:30:18.831773  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:18.846470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.846549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.851454  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:18.951695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.969096  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 08:30:18.989016  387539 addons.go:238] Setting addon gcp-auth=true in "addons-051783"
	I0929 08:30:18.989103  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:18.989486  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:19.008865  387539 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 08:30:19.008932  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:19.027173  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:19.120755  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:19.121923  387539 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 08:30:19.122900  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 08:30:19.122919  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 08:30:19.143102  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 08:30:19.143126  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 08:30:19.162866  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.162888  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 08:30:19.183136  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.346348  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.346554  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.453192  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:19.501972  387539 addons.go:479] Verifying addon gcp-auth=true in "addons-051783"
	I0929 08:30:19.503639  387539 out.go:179] * Verifying gcp-auth addon...
	I0929 08:30:19.505850  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 08:30:19.554509  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 08:30:19.554531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:19.740347  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:19.845786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.845969  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.951989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.008598  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.299545  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.299581  387539 retry.go:31] will retry after 1.544699875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.345964  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.452158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.736317  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:20.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.845861  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.951742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.009815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.346000  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.346032  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.451989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.508685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.845176  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:21.845841  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.846114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.952278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.009273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:22.345019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.346075  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 08:30:22.403582  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.403621  387539 retry.go:31] will retry after 3.049515308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.452614  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.512271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:22.736403  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:22.845553  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:22.951921  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.010165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.345659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.345820  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.451629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.509351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.846115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.846228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.952047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.008926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:24.346005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.452131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.509321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:24.737273  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:24.845357  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.845622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.951671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.010110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.346716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.346788  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.453468  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:25.510278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.845392  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.845982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.951775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:26.006239  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.006394  387539 retry.go:31] will retry after 2.506202781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.008893  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.346077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.346300  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.452870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:26.510002  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.845936  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.846437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.952599  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.010142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:27.237031  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:27.345974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.346037  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.451702  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.509719  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:27.845995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.846262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.952122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.008966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.345646  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.346068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.452500  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.509096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.513240  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:28.845526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.845724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.952636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.009980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.073172  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:29.073204  387539 retry.go:31] will retry after 5.087993961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:29.345624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.345890  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.451566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.509314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.736247  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:29.845167  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.845589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.952470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.009285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.345961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.346228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.451762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.509671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.845660  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.845938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.951757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.010434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:31.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.346159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.452024  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.508639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:31.736734  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:31.845802  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.846069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.008631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.345183  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.345554  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.452360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.509283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.846011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.846198  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.952029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.008505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.345468  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.346184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.845492  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.845973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.951615  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.009499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:34.161747  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 08:30:34.236880  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:34.346017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.346168  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.451966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:34.713989  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.714029  387539 retry.go:31] will retry after 10.074915141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
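	[editor's note] The "apiVersion not set, kind not set" error above is kubectl's client-side validation rejecting a manifest document that lacks the two mandatory top-level fields every Kubernetes object requires. The actual contents of /etc/kubernetes/addons/ig-crd.yaml are not captured in this log, so the snippet below is only a hypothetical sketch of the kind of header kubectl expects for a CRD manifest; all names in it are placeholders, not the file from this run.
	
	# Hypothetical illustration only -- not the ig-crd.yaml used in this test run.
	# kubectl fails with "apiVersion not set, kind not set" when a manifest
	# document omits these two top-level fields.
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.example.io        # placeholder: must be <plural>.<group>
	spec:
	  group: gadget.example.io              # placeholder group
	  names:
	    kind: Trace
	    plural: traces
	  scope: Namespaced
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	          x-kubernetes-preserve-unknown-fields: true
	
	Because the other resources in the same apply ("namespace/gadget unchanged", "daemonset.apps/gadget configured", ...) succeed, the failure appears isolated to the CRD file, and minikube's addon logic simply retries the whole apply, as the retry.go lines show.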
	I0929 08:30:34.846205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.846262  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.952041  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.009299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.346101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.346147  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.452133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.508814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.845885  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.846022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.952026  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.008870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:36.345968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.346092  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.452038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.508708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:36.736573  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:36.845946  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.846138  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.951934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.010147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.345611  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.346391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.452092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.508537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.845236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.845710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.951391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.009185  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:38.345379  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.345497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.452268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.509054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:38.736952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:38.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.845942  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.951848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.009583  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.345482  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.345749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.452467  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.509234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.845877  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.845968  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.951690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:40.345848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.346009  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.451555  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:40.737059  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:40.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.845985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.951632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.009343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.345541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.346172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.452233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.509214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.846040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.846112  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.951896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.009603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.345289  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.451783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.509700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.845799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.845983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.951967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.008596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:43.236598  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:43.346000  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.346147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.452087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.509013  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:43.846134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.846259  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.952036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.008744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.345998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.346244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.452116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.508722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.789668  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:44.848890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.848956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.952825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:45.346063  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.346265  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:30:45.349824  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.349902  387539 retry.go:31] will retry after 10.254228561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.451609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.509499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:45.736311  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:45.845308  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:45.845508  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.952578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.345276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.345820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.451640  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.509515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.845665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.845801  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.951610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.009568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:47.346135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.347757  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.451685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.509687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:47.736659  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:47.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.846278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.010881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.345580  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.346116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.508539  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.845649  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.845738  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.951441  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.009204  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.345513  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.345678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.451528  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.845483  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.846049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.951870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.009622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:50.236705  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:50.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.346397  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.452090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.508959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:50.845410  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.846029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.008722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.345637  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.346169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.452115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.845715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.845962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.951758  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.009370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.345902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.451699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.509385  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.735450  387539 node_ready.go:49] node "addons-051783" is "Ready"
	I0929 08:30:52.735486  387539 node_ready.go:38] duration metric: took 41.00212415s for node "addons-051783" to be "Ready" ...
	I0929 08:30:52.735510  387539 api_server.go:52] waiting for apiserver process to appear ...
	I0929 08:30:52.735569  387539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 08:30:52.754269  387539 api_server.go:72] duration metric: took 41.621848619s to wait for apiserver process to appear ...
	I0929 08:30:52.754302  387539 api_server.go:88] waiting for apiserver healthz status ...
	I0929 08:30:52.754329  387539 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 08:30:52.758629  387539 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 08:30:52.759566  387539 api_server.go:141] control plane version: v1.34.1
	I0929 08:30:52.759591  387539 api_server.go:131] duration metric: took 5.283085ms to wait for apiserver health ...
	I0929 08:30:52.759601  387539 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 08:30:52.763531  387539 system_pods.go:59] 20 kube-system pods found
	I0929 08:30:52.763568  387539 system_pods.go:61] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.763584  387539 system_pods.go:61] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.763591  387539 system_pods.go:61] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.763598  387539 system_pods.go:61] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.763604  387539 system_pods.go:61] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.763610  387539 system_pods.go:61] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.763618  387539 system_pods.go:61] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.763625  387539 system_pods.go:61] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.763632  387539 system_pods.go:61] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.763646  387539 system_pods.go:61] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.763655  387539 system_pods.go:61] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.763661  387539 system_pods.go:61] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.763671  387539 system_pods.go:61] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.763677  387539 system_pods.go:61] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.763685  387539 system_pods.go:61] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.763695  387539 system_pods.go:61] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.763703  387539 system_pods.go:61] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.763711  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.763762  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.763769  387539 system_pods.go:61] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.763779  387539 system_pods.go:74] duration metric: took 4.172047ms to wait for pod list to return data ...
	I0929 08:30:52.763792  387539 default_sa.go:34] waiting for default service account to be created ...
	I0929 08:30:52.766094  387539 default_sa.go:45] found service account: "default"
	I0929 08:30:52.766121  387539 default_sa.go:55] duration metric: took 2.321933ms for default service account to be created ...
	I0929 08:30:52.766133  387539 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 08:30:52.770696  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:52.770757  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.770770  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.770776  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.770784  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.770789  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.770794  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.770802  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.770808  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.770815  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.770824  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.770843  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.770851  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.770863  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.770872  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.770881  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.770891  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.770899  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.770908  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.770928  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.770935  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.770959  387539 retry.go:31] will retry after 296.951592ms: missing components: kube-dns
	I0929 08:30:52.847272  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:52.847306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.847283  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.956403  387539 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:52.956428  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.058959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.074050  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.074084  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.074092  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.074102  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.074109  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.074114  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.074118  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.074124  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.074127  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.074131  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.074136  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.074139  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.074143  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.074148  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.074158  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.074162  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.074167  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.074171  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.074177  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074185  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074189  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.074204  387539 retry.go:31] will retry after 260.486294ms: missing components: kube-dns
	I0929 08:30:53.340885  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.340928  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.340939  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.340949  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.340957  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.340970  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.340976  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.340984  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.340989  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.340994  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.341002  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.341007  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.341013  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.341020  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.341029  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.341037  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.341045  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.341052  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.341071  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341079  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341086  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.341104  387539 retry.go:31] will retry after 402.781904ms: missing components: kube-dns
	I0929 08:30:53.345674  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.452965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.509656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.749539  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.749584  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.749596  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.749607  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.749615  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.749625  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.749637  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.749644  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.749652  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.749658  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.749673  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.749681  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.749688  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.749700  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.749713  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.749725  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.749741  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.749752  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.749760  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749772  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749780  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.749803  387539 retry.go:31] will retry after 372.296454ms: missing components: kube-dns
	I0929 08:30:53.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.846351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.953470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.009621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.127961  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:54.128007  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:54.128016  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Running
	I0929 08:30:54.128029  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:54.128037  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:54.128046  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:54.128055  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:54.128068  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:54.128073  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:54.128080  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:54.128094  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:54.128101  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:54.128111  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:54.128119  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:54.128131  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:54.128140  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:54.128150  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:54.128156  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:54.128167  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128182  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128190  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Running
	I0929 08:30:54.128201  387539 system_pods.go:126] duration metric: took 1.362060932s to wait for k8s-apps to be running ...
	I0929 08:30:54.128214  387539 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 08:30:54.128269  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 08:30:54.143506  387539 system_svc.go:56] duration metric: took 15.282529ms WaitForService to wait for kubelet
	I0929 08:30:54.143541  387539 kubeadm.go:578] duration metric: took 43.011126136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:30:54.143567  387539 node_conditions.go:102] verifying NodePressure condition ...
	I0929 08:30:54.146666  387539 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 08:30:54.146694  387539 node_conditions.go:123] node cpu capacity is 8
	I0929 08:30:54.146710  387539 node_conditions.go:105] duration metric: took 3.13874ms to run NodePressure ...
	I0929 08:30:54.146723  387539 start.go:241] waiting for startup goroutines ...
	I0929 08:30:54.346096  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.346452  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.452512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.845681  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.846213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.952945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.009776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.346034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.346210  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.452987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.604936  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:55.845661  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.846303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.952647  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.009596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:56.227075  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:56.227117  387539 retry.go:31] will retry after 11.111742245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
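A quick way to confirm what the validator is complaining about, assuming shell access to the node (the `minikube ssh` invocation below is illustrative, not taken from this log), is to check whether the manifest actually declares the top-level fields kubectl reports as missing:

	# Assumed diagnostic, not part of the captured log: look for the top-level
	# apiVersion/kind keys the client-side validator says are not set.
	out/minikube-linux-amd64 -p addons-051783 ssh \
	  "grep -E '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml || echo 'apiVersion/kind not set'"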
	I0929 08:30:56.346587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.346664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.452545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.509737  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:56.846282  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.846404  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.952291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.008904  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.346213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.452947  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.553095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.845310  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.845536  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.952617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.009229  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.345911  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.452036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.509465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.846116  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.846300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.954223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.009020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.345799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.451999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.846016  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.846048  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.951820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.009510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.346008  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.346043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.452095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.845635  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.846133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.952120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.008582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.346305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.346398  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.452779  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.509350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.845977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.846089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.951976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.009725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.346046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.346195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.452152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.508856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.845624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.845816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.951786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.009165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.345570  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.345806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.452275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.508934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.846184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.846321  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.009280  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.345995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.346111  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.452256  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.509372  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.845664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.846025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.952025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.009380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.346175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.346181  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.452623  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.845511  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.845789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.951736  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.345807  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.346120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.452299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.845431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.845747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.951811  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.009905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.339106  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:07.345597  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:07.346187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.452931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.509578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.846266  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:31:07.899059  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:31:07.899089  387539 retry.go:31] will retry after 40.559996542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
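The error text itself names a workaround: re-running the same apply with --validate=false skips client-side schema validation. A sketch of that retry is below; the command and file paths are copied verbatim from the log, only the flag is added, and whether skipping validation is actually desirable for this addon is a separate question:

	# Workaround suggested by the error message: disable client-side validation
	# for this apply. Paths and binary location are as shown in the log above.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml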
	I0929 08:31:07.952238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.009242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.345806  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.345963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.452237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.508727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.846489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.846533  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.952772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.010175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.346214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.346399  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.452814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.509683  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.846071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.846175  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.009101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.345238  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.346055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.509087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.845466  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.845735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.951734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.009376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.346018  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.346093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.452602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.509357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.845819  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.846106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.952393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.009094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.345109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.345635  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.452900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.509747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.845711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.009115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.345408  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.452396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.509231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.845494  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.846119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.952602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.010164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.346040  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.452353  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.845489  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.845815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.009711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.346376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.452852  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.509706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.846977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.847062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.952541  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.009327  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.345888  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.346265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.452465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.845448  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.845961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.010066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.345301  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.345698  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.451859  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.552769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.845897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.846010  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.951895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.345789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.345935  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.451969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.509592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.845904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.846320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.952560  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.009221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.345672  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.509390  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.845688  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.845944  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.346107  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.451968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.845746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.846140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.952760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.345888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.345967  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.452022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.510304  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.845633  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.846006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.952314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.346112  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.346281  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.452380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.509171  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.845463  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.952321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.008794  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.345924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.346134  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.452014  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.510198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.845423  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.845908  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.952121  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.008788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.345818  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.345880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.452709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.846079  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.346408  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.346645  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.452594  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.509856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.846416  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.846446  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.952577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.009243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.346002  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.346328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.452568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.509226  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.845630  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.845989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.009102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.344984  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.345670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.451721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.509670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.846298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.846328  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.952436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.009088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.345071  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.345514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.452990  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.509800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.845538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.845549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.952752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.345731  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.345767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.451898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.845660  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.845743  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.954437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.009591  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.345694  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.345826  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.451850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.509114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.845457  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.845863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.008880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.452522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.845340  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.846098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.345562  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.345751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.451752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.509498  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.846005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.846015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.952296  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.008916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.346067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.346085  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.452074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.508388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.846407  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.952505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.009198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.345603  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.345997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.452284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.508994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.845333  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.845899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.346173  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.346187  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.452081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.508670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.848907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.848908  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.951592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.009305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.345881  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.346217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.452391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.509291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.846291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.952619  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.345641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.346183  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.452340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.845435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.845657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.951659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.009365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.345904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.345948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.508874  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.846399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.952667  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.009535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.346057  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.346313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.452593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.509172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.845821  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.951931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.009666  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.345746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.345756  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.451930  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.509717  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.845968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.846159  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.952302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.008813  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.345751  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.346083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.452220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.508800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.846373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.846428  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.952582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.009477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.345816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.346146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.509082  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.845206  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.845593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.952920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.009344  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.346032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.452584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.846130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.846227  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.952242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.009320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.345668  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.346165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.452320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.846497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.846568  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.009270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.346009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.346017  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.452179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.508810  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.846318  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.953200  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.053765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.345928  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.345949  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.451841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.509367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.845759  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.009049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.346089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.346296  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.509276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.845998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.846031  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.953092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.008958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.348118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.348220  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.452645  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.459706  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:48.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.845521  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.846369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.952245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:31:49.009817  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:31:49.009958  387539 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
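	(Context for the failure above: kubectl refuses any manifest document that omits the top-level apiVersion and kind fields, which is exactly what the ig-crd.yaml validation error reports. The lines below are a minimal, hypothetical sketch of how one could reproduce that check locally; the file path and field values are illustrative and not taken from this run.)
	# Hypothetical local check (path is illustrative):
	#   kubectl apply --dry-run=client -f ./ig-crd.yaml
	# A document missing the required header fails client-side decoding/validation, whereas a
	# well-formed CRD header sets both fields, e.g.:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition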
	I0929 08:31:49.346161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.346314  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.452693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.509721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:49.846323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.846403  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.952288  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.009479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.346165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.346262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.511027  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.846141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.846346  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.952309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.345651  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.346358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.452496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.845910  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.846102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.952292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.008948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.346231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.346476  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.452572  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.846165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.846219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.009004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.452012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.510161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.845342  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.845616  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.952894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.009820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.346066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.346111  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.451951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.509668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.845920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.845975  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.952307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.008953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.346482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.346564  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.452557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.509198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.846008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.846122  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.952273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.009005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.345943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.345987  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.451970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.509693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.846279  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.846364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.952734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.345985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.452169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.509107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.845868  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.845918  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.952230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.008806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.346324  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.346362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.452386  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.509302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.845621  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.952271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.009231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.345552  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.346005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.452425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.509368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.846005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.846038  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.009825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.346371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.346435  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.845617  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.845923  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.952434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.009268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.346190  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.452432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.845820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.952038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.009864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.345911  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.346056  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.452757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.845906  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.846292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.952670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.345785  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.346020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.452457  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.509461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.846203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.952857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.008766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.346191  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.346205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.509374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.845874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.846090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.952199  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.009031  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.345858  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.345930  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.451888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.846482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.846625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.952585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.009218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.345706  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.452653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.509286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.845541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.845704  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.951956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.009468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.345695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.345745  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.451863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.510159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.845888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.845901  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.951951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.345980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.346046  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.452589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.509271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.846034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.952511  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.008945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.346573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.346620  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.452981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.509795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.846438  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.009110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.345733  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.451902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.509713  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.846101  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.846139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.952420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.009168  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.346099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.346223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.510142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.845960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.951897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.010286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.345508  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.346153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.452434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.845813  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.846236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.952299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.345858  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.346006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.452117  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.508849  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.845790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.846007  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.951901  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.346064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.346065  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.452106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.508883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.846158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.846171  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.952374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.008914  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.346557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.346608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.509895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.846827  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.846861  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.952699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.009411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.345859  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.346429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.452726  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.846572  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.846610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.952453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.345250  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.345814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.508754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.846167  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.846211  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.346188  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.509267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.846236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.846235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.952637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.009342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.345703  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.346091  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.509449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.846316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.846344  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.952405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.345264  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.346400  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.452542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.509262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.845773  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.346194  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.509376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.848839  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.849867  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.952936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.010023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.346114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.452763  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.846197  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.846244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.952388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.009290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.345800  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.346246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.452672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.509534  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.846304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.846334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.952785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.009642  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.346072  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.346415  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.452739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.509705  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.846107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.846335  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.952786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.009641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.346282  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.346356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.452912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.509769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.846639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.846675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.953086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.009130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.452469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.510429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.845959  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.846628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.953298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.009036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.347053  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:27.347275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.452777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.846103  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.846145  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.072906  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.073113  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.508950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.845955  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.846095  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.009351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.347464  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.347629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.453517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.553437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.846126  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.846245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.008971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.345959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.346015  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.509418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.845766  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.846285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.008956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.345931  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.346361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.452474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.845897  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.846021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.952093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.345435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.345772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.452246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.509083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.845812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.845956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.346099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.346120  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:33.452146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.508729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.846479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.036243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.036382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.345600  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.452267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.508982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.845610  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.845774  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.953630  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.008888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.346785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.346853  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.451866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.509729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.846406  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.954174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.055655  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.845459  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.845939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.951953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.009866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.346021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.346064  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.452076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.509650  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.952853  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.345624  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.452271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.509005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.845239  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.845607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.953072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.009685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.346312  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.346343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.452629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.846305  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.952898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.009523  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.346058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.346222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.452218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.509154  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.845436  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.845959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.952223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.008967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.345362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.345715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.451987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.509593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.846208  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.952460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.009083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.345364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.345994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.452312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.845412  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.846137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.009246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.345531  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.451965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.509607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.845677  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.845725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.953242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.008881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.346140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.346245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.508976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.846058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.846073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.008952  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.452230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.508958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.846253  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.846260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.952496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.009248  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.345700  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.346422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.452785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.846041  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.951796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.009505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.345956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.345992  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.451971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.509761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.846334  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.952805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.009735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.345689  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.346306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.452750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.845880  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.846359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.952570  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.009297  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.345969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.346094  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.452240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.509049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.846006  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.952184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.008907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.345976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.346081  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.451788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.510100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.845304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.848309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.345805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.345874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.451634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.509582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.845944  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.846447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.953076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.008934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.345804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.345877  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.452096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.508656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.846195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.846222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.952603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.009374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.345675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.846118  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.952566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.009207  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.345383  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.452193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.508803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.846518  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.846608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.952787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.009360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.346141  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.346211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.452319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.846350  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.846419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.952451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.009066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.345454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.345940  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.452221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.508812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.846088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.846113  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.952011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.345986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.346090  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.846361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.846431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.952871  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.009495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.346447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.346500  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.452249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.508841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.845781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.845828  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.951889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.009775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.346440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.346485  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.452552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.008817  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.346461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.346526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.455517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.845761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.845875  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.952068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.009767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.346151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.346291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.452530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.553772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.845974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.846019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.010114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.345293  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.345801  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.451761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.845976  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.846143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.952766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.009431  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.345682  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.346257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.453746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.509942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.846258  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.846309  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.952266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.009753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.346015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.346114  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.452202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.846315  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.846361  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.952432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.009137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.345758  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:05.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.552401  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.846099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.846460  387539 kapi.go:107] duration metric: took 2m53.003293209s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 08:33:05.954425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.011134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.346506  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.509064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.845958  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.952356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.009108  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.453032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.510592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.846109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.954081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.053417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.351454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.453361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.509493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.846396  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.953209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.013355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.346185  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.452954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.509941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.846594  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.953166  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.011098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.345673  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.452685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.846291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.952757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.010232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.345715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.452872  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.509757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.845940  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.009576  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.476146  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.476164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.508903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.846546  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.952547  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.009054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.345224  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:13.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.845854  387539 kapi.go:107] duration metric: took 3m1.003676867s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 08:33:13.953193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.953043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.009571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.509029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.952456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.008996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.452993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.509565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.951754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.010077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.452637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.509767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.951958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.009558  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.452610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.509383  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.953289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.452727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.952537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.054307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.453283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.508941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.509772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.452743  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.509695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.952306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.952897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.009648  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.508741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.952701  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.009545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.452359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.552870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.952571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.509263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.009531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.452141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.952219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.008825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.452943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.509596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.951821  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.009481  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.509195  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.953059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.053354  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.452999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.951979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.009797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.453388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.508724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.952067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.009597  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.452510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.509504  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.009757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.451725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.952055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.009994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.509072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.952958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.009293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.453339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.009056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.453293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.508838  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.953074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.013450  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.452649  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.509512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.952032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.009978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.509308  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.952931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.452323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.953222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.009006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.452790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.509538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.951932  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.009432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.455147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.508750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.952251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.009149  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.453440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.009671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.451894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.509637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.951679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.009272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.452122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.509896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.952875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.009456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.452086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.509855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.009503  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.509412  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.951948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.009749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.508624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.452489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.509007  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.952454  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.452929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.509326  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.953179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.009573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.452080  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.952316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.008983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.509589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.509141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.952449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.009163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.452673  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.952399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.008968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.452357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.509312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.008903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.452740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.008515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.952684  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.009269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.452860  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.509842  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.952800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.009471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.452132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.508760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.952191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.008875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.452781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.953587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.054438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.452155  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.508625  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.009015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.452064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.508595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.952010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.010061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.452878  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.509741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.008974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.452307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.508972  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.952590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.452989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.952475  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.453033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.509562  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.008939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.453017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.010460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.451978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.509900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.008912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.452986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.509922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.009396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.452015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.508696  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.053986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.453071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.508707  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.509938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.952708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.951905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.452055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.509716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.952153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.009034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.452857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.509634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.952411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.009151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.453043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.508787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.951746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.452755  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.509577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.951855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.009721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.452270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.509070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.952417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.009119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.452899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.509945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.008973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.452420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.952703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.009419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.452368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.509153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.009176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.451907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.509703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.952486  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.009310  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.453128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.509247  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.952807  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.452479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.951882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.009724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.952303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.052740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.452786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.509524  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.952084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.009393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.452606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.509227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.952919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.009449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.509272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.953056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.008665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.452311  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.952950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.452806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.951992  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.009677  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.454926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.556176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.952552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.009135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.452491  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.509187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.952765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.010044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.453284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.509124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.452601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.509427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.952099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.008641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.452715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.952690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.009533  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.452468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.509120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.952652  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.009453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.452283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.509034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.952982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.010277  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.452898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.509951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.952333  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.009152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.452796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.509514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.951891  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.452769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.509365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.952087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.009812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.508954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.953223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.009045  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.508795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.952125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.452644  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.509926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.009805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.452339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.509062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.952706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.009289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.453174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.553316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.952985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.009340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.453131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.951783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.009764  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.509221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.952799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.009661  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.451963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.951981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.009474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.451982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.510046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.952776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.009347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.451710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.952334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.452851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.509879  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.952761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.009609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.453093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.508618  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.952367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.009335  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.451828  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.509765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.952131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.008768  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.452125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.508617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.951915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.452347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.509044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.953033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.008575  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.509020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.009883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.952427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.008882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.451996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.509798  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.952349  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.008994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.452078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.953244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.008791  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.452820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.509438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.952276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.454329  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.508526  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.951927  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.009514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.509176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.953124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.008742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.452318  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.952978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.008626  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.451991  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.509530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.008765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.452089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.952535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.009257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.452850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.509391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.951665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.010070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.452234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.508751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.952557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.452356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.509081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.952954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.451578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.509069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.952979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.009394  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.451672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.953084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.008804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.452100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.508590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.952186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.008919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.451692  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.952159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.008936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.452290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.509522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.952657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.452687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.952004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.009665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.509219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.953317  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.053305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.452957  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.509406  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.951753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.010494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.451613  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.951916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.009368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.451621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.509537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.951986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.009697  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.509309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.953131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.008745  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.452118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.508915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.952506  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.009283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.453125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.508686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.009048  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.452863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.509403  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.952211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.009143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.452579  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.952593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.009236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.452668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.509287  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.953152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.008951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.451960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.951797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.009781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.452176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.508962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.952918  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.010145  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.452488  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.509471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.951970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.009582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.451912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.508700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.952497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.453230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.509119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.952889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.454455  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.509009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.953474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.009465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.452010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.509605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.951929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.508723  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.053411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.452665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.953146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.008802  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.451806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.509590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.952410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.053369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.509264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.952818  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.009233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.451994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.509760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.009364  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.452180  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.509156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.009587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.451744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.509487  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.008678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.451795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.509551  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.952298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.452628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.509567  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.952018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.008605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.509196  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.953269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.009042  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.452866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.509473  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.952009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.053084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.509189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.952595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.452191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.508730  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.952389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.452680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.509241  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.952532  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.009493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.452238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.509131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.952695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.009405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.452184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.509012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.952350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.009078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.452686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.509295  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.953015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.008664  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.452062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.508632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.952395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.008941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.451875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.509433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.952771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.009472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.509331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.953175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.009259  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.453005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.509759  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.952445  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.008890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.452239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.952339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.452889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.053004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.452816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.509585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.951856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.009542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.452139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.508997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.009668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.452051  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.952019  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.008662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.451816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.509495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.953217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.008712  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.452395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.008657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.451985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.509265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.953263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.008734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.509077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.952688  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.009433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.952693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.009377  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.452681  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.509209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.952342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.009052  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.452762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.509115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.953186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.010178  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.509505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.951715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.009812  387539 kapi.go:107] duration metric: took 5m46.503976887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 08:36:06.011826  387539 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-051783 cluster.
	I0929 08:36:06.013337  387539 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 08:36:06.014809  387539 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
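	[editor's note] The `gcp-auth-skip-secret` label key in the message above is the only detail taken from this log; everything else in the sketch below (pod name, namespace, image, client-go wiring, kubeconfig path) is an illustrative assumption, not part of the test run. It is a minimal sketch of where such a label would sit in a pod created against the addons-051783 context, assuming client-go and a kubeconfig at the default location:

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Use the kubeconfig minikube wrote (assumed to be at the default ~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				Labels: map[string]string{
					// Label key from the log message above; its presence in the pod
					// configuration tells the gcp-auth addon to skip mounting credentials.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "busybox", Image: "gcr.io/k8s-minikube/busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}

		if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	The label has to be part of the pod configuration when the pod is created, which is consistent with the following message: pods that already exist only pick up a change after being recreated or after the addon is re-enabled with --refresh.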
	I0929 08:36:06.452825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.952244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.452410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.952142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.452175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.451974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.953036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.452917  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.953235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.451608  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.952203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.952132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:13.449535  387539 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0929 08:36:13.449570  387539 kapi.go:107] duration metric: took 6m0.00092228s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0929 08:36:13.449699  387539 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0929 08:36:13.451535  387539 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth
	I0929 08:36:13.453038  387539 addons.go:514] duration metric: took 6m2.320628972s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth]
	I0929 08:36:13.453089  387539 start.go:246] waiting for cluster config update ...
	I0929 08:36:13.453117  387539 start.go:255] writing updated cluster config ...
	I0929 08:36:13.453476  387539 ssh_runner.go:195] Run: rm -f paused
	I0929 08:36:13.457677  387539 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:13.461120  387539 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.465176  387539 pod_ready.go:94] pod "coredns-66bc5c9577-n8bx8" is "Ready"
	I0929 08:36:13.465203  387539 pod_ready.go:86] duration metric: took 4.058605ms for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.467075  387539 pod_ready.go:83] waiting for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.470714  387539 pod_ready.go:94] pod "etcd-addons-051783" is "Ready"
	I0929 08:36:13.470733  387539 pod_ready.go:86] duration metric: took 3.636114ms for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.472521  387539 pod_ready.go:83] waiting for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.476217  387539 pod_ready.go:94] pod "kube-apiserver-addons-051783" is "Ready"
	I0929 08:36:13.476238  387539 pod_ready.go:86] duration metric: took 3.697266ms for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.478025  387539 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.862501  387539 pod_ready.go:94] pod "kube-controller-manager-addons-051783" is "Ready"
	I0929 08:36:13.862531  387539 pod_ready.go:86] duration metric: took 384.48807ms for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.061450  387539 pod_ready.go:83] waiting for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.461226  387539 pod_ready.go:94] pod "kube-proxy-wbl7p" is "Ready"
	I0929 08:36:14.461255  387539 pod_ready.go:86] duration metric: took 399.774957ms for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.661898  387539 pod_ready.go:83] waiting for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061371  387539 pod_ready.go:94] pod "kube-scheduler-addons-051783" is "Ready"
	I0929 08:36:15.061418  387539 pod_ready.go:86] duration metric: took 399.4933ms for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061435  387539 pod_ready.go:40] duration metric: took 1.603719933s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:15.109384  387539 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 08:36:15.111939  387539 out.go:179] * Done! kubectl is now configured to use "addons-051783" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 08:45:49 addons-051783 crio[938]: time="2025-09-29 08:45:49.846915126Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 29 08:45:51 addons-051783 crio[938]: time="2025-09-29 08:45:51.291266064Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3/POD" id=b7637db5-4122-4d5a-9c21-731d14d4e378 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 08:45:51 addons-051783 crio[938]: time="2025-09-29 08:45:51.291331363Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 08:45:51 addons-051783 crio[938]: time="2025-09-29 08:45:51.308145316Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3 Namespace:local-path-storage ID:2363636d64671b511f81e8978281eea816198affd4417488ba8b91baa9c73402 UID:b1b4dc23-d0fc-4122-8726-51da6b925412 NetNS:/var/run/netns/9619adfa-5509-4abc-a246-3666984db608 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 08:45:51 addons-051783 crio[938]: time="2025-09-29 08:45:51.308190220Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3 to CNI network \"kindnet\" (type=ptp)"
	Sep 29 08:45:51 addons-051783 crio[938]: time="2025-09-29 08:45:51.318478450Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3 Namespace:local-path-storage ID:2363636d64671b511f81e8978281eea816198affd4417488ba8b91baa9c73402 UID:b1b4dc23-d0fc-4122-8726-51da6b925412 NetNS:/var/run/netns/9619adfa-5509-4abc-a246-3666984db608 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 08:45:51 addons-051783 crio[938]: time="2025-09-29 08:45:51.318593643Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3 for CNI network kindnet (type=ptp)"
	Sep 29 08:45:51 addons-051783 crio[938]: time="2025-09-29 08:45:51.319378119Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 08:45:51 addons-051783 crio[938]: time="2025-09-29 08:45:51.320169309Z" level=info msg="Ran pod sandbox 2363636d64671b511f81e8978281eea816198affd4417488ba8b91baa9c73402 with infra container: local-path-storage/helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3/POD" id=b7637db5-4122-4d5a-9c21-731d14d4e378 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 08:45:51 addons-051783 crio[938]: time="2025-09-29 08:45:51.321520138Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=89306051-337a-4526-bdce-979a9979a38f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:45:51 addons-051783 crio[938]: time="2025-09-29 08:45:51.321771789Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=89306051-337a-4526-bdce-979a9979a38f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:46:06 addons-051783 crio[938]: time="2025-09-29 08:46:06.196626554Z" level=info msg="Stopping pod sandbox: 3f400eb1db037812cae9424c7d2aeb8809b91ee1f3a73cbf257f23ce9404b8cb" id=70b621ea-864e-40bf-b03b-0c58e41036f7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:46:06 addons-051783 crio[938]: time="2025-09-29 08:46:06.196680871Z" level=info msg="Stopped pod sandbox (already stopped): 3f400eb1db037812cae9424c7d2aeb8809b91ee1f3a73cbf257f23ce9404b8cb" id=70b621ea-864e-40bf-b03b-0c58e41036f7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:46:06 addons-051783 crio[938]: time="2025-09-29 08:46:06.197036702Z" level=info msg="Removing pod sandbox: 3f400eb1db037812cae9424c7d2aeb8809b91ee1f3a73cbf257f23ce9404b8cb" id=118fbab6-74d3-4688-97c4-25e7e399b291 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:46:06 addons-051783 crio[938]: time="2025-09-29 08:46:06.202275411Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 08:46:06 addons-051783 crio[938]: time="2025-09-29 08:46:06.202308289Z" level=info msg="Removed pod sandbox: 3f400eb1db037812cae9424c7d2aeb8809b91ee1f3a73cbf257f23ce9404b8cb" id=118fbab6-74d3-4688-97c4-25e7e399b291 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:46:06 addons-051783 crio[938]: time="2025-09-29 08:46:06.202644692Z" level=info msg="Stopping pod sandbox: 3faa7f01c03cb71d5bfc1da04803ee7282f7cbaa656b6911945f028e92ea2f73" id=96c76a05-af67-4032-b6c6-fa2517b1d3be name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:46:06 addons-051783 crio[938]: time="2025-09-29 08:46:06.202670462Z" level=info msg="Stopped pod sandbox (already stopped): 3faa7f01c03cb71d5bfc1da04803ee7282f7cbaa656b6911945f028e92ea2f73" id=96c76a05-af67-4032-b6c6-fa2517b1d3be name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:46:06 addons-051783 crio[938]: time="2025-09-29 08:46:06.202971275Z" level=info msg="Removing pod sandbox: 3faa7f01c03cb71d5bfc1da04803ee7282f7cbaa656b6911945f028e92ea2f73" id=ee8992c3-ab6d-41e8-8492-a97559e2d413 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:46:06 addons-051783 crio[938]: time="2025-09-29 08:46:06.208628890Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 08:46:06 addons-051783 crio[938]: time="2025-09-29 08:46:06.208665205Z" level=info msg="Removed pod sandbox: 3faa7f01c03cb71d5bfc1da04803ee7282f7cbaa656b6911945f028e92ea2f73" id=ee8992c3-ab6d-41e8-8492-a97559e2d413 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:46:20 addons-051783 crio[938]: time="2025-09-29 08:46:20.514240592Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=57a86767-b413-4cf3-965d-117d627848cf name=/runtime.v1.ImageService/PullImage
	Sep 29 08:46:20 addons-051783 crio[938]: time="2025-09-29 08:46:20.529212760Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Sep 29 08:46:33 addons-051783 crio[938]: time="2025-09-29 08:46:33.958797473Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=05071cfb-952a-4900-88af-cec85cdc402a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:46:33 addons-051783 crio[938]: time="2025-09-29 08:46:33.959105144Z" level=info msg="Image docker.io/nginx:alpine not found" id=05071cfb-952a-4900-88af-cec85cdc402a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	15470dfdbc373       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          8 minutes ago       Running             csi-snapshotter                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	27b09cd861214       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          9 minutes ago       Running             csi-provisioner                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	f91efb30edf5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          10 minutes ago      Running             busybox                                  0                   b37a2c191a161       busybox
	b891eff935e5b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            10 minutes ago      Running             liveness-probe                           0                   0a15333993f59       csi-hostpathplugin-59n9q
	1b49b8a0c49b0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           11 minutes ago      Running             hostpath                                 0                   0a15333993f59       csi-hostpathplugin-59n9q
	78cd30ad0ac78       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                12 minutes ago      Running             node-driver-registrar                    0                   0a15333993f59       csi-hostpathplugin-59n9q
	fa2f9b0c2f698       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            13 minutes ago      Running             gadget                                   0                   2b559b62ddeb7       gadget-p475s
	958aa9722d317       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   13 minutes ago      Running             csi-external-health-monitor-controller   0                   0a15333993f59       csi-hostpathplugin-59n9q
	727b1119f42fa       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             13 minutes ago      Running             csi-attacher                             0                   942be1f7fe3d6       csi-hostpath-attacher-0
	964faa56de026       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              14 minutes ago      Running             csi-resizer                              0                   e4387328f31ab       csi-hostpath-resizer-0
	739db184c3579       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             14 minutes ago      Running             local-path-provisioner                   0                   7bd7dc81e5ff1       local-path-provisioner-648f6765c9-mzt6q
	ec2908a8acb76       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             15 minutes ago      Running             coredns                                  0                   8e80666def432       coredns-66bc5c9577-n8bx8
	48e51a6b3842e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             15 minutes ago      Running             storage-provisioner                      0                   b3063249d1902       storage-provisioner
	e6e25b7f19aec       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             16 minutes ago      Running             kindnet-cni                              0                   ea7b34d68514f       kindnet-47v7m
	a04df67a3379a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             16 minutes ago      Running             kube-proxy                               0                   9dbf0742f683c       kube-proxy-wbl7p
	3d5bc8bd7f0ff       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             16 minutes ago      Running             etcd                                     0                   240e67822abd8       etcd-addons-051783
	2e4ff50d0ab7d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             16 minutes ago      Running             kube-apiserver                           0                   7d31b1c07e6fc       kube-apiserver-addons-051783
	6d75e80cafef2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             16 minutes ago      Running             kube-controller-manager                  0                   0e144a50e60a7       kube-controller-manager-addons-051783
	33ea9996cc1d3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             16 minutes ago      Running             kube-scheduler                           0                   eee48e5387175       kube-scheduler-addons-051783
	
	
	==> coredns [ec2908a8acb7634faddb0add70c1cdc6e4b2ec0e64082e83c00bcc1f5187825c] <==
	[INFO] 10.244.0.22:53146 - 52855 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135376s
	[INFO] 10.244.0.22:44463 - 13157 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003407125s
	[INFO] 10.244.0.22:42741 - 2598 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005880456s
	[INFO] 10.244.0.22:43358 - 65412 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005081069s
	[INFO] 10.244.0.22:56808 - 9814 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005221504s
	[INFO] 10.244.0.22:57222 - 14161 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005164648s
	[INFO] 10.244.0.22:51834 - 10942 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006548594s
	[INFO] 10.244.0.22:37769 - 48093 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004505471s
	[INFO] 10.244.0.22:41744 - 45710 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007413415s
	[INFO] 10.244.0.22:56260 - 25719 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002697955s
	[INFO] 10.244.0.22:35710 - 58420 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003322975s
	[INFO] 10.244.0.26:59060 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000230685s
	[INFO] 10.244.0.26:45421 - 3 "AAAA IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000136278s
	[INFO] 10.244.0.26:44591 - 4 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116365s
	[INFO] 10.244.0.26:57553 - 5 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117524s
	[INFO] 10.244.0.26:49960 - 6 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003803543s
	[INFO] 10.244.0.26:37529 - 7 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 204 0.004482599s
	[INFO] 10.244.0.26:51766 - 8 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.int. udp 75 false 512" NXDOMAIN qr,rd,ra 148 0.147452363s
	[INFO] 10.244.0.26:46339 - 9 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000143392s
	[INFO] 10.244.0.26:35817 - 10 "A IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000114781s
	[INFO] 10.244.0.26:57333 - 11 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128127s
	[INFO] 10.244.0.26:33589 - 12 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009747s
	[INFO] 10.244.0.26:38381 - 13 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003185786s
	[INFO] 10.244.0.26:42582 - 14 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 204 0.005148102s
	[INFO] 10.244.0.26:42532 - 15 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.int. udp 75 false 512" NXDOMAIN qr,rd,ra 148 0.130600393s
	
	
	==> describe nodes <==
	Name:               addons-051783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-051783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=addons-051783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-051783
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-051783"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 08:30:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-051783
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 08:46:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 08:46:08 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 08:46:08 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 08:46:08 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 08:46:08 +0000   Mon, 29 Sep 2025 08:30:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-051783
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 83273b57f406470abdf516e252de2f52
	  System UUID:                ec5529e1-1ad9-400f-8294-1adf6616ba82
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  gadget                      gadget-p475s                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-n8bx8                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpathplugin-59n9q                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-addons-051783                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-47v7m                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-051783                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-051783                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-wbl7p                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-051783                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  local-path-storage          local-path-provisioner-648f6765c9-mzt6q                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node addons-051783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node addons-051783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node addons-051783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node addons-051783 event: Registered Node addons-051783 in Controller
	  Normal  NodeReady                15m   kubelet          Node addons-051783 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[ +16.774979] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 21 41 37 dd f5 08 06
	[  +0.000328] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[  +6.075530] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 33 34 7b 85 cf 08 06
	[  +0.055887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[Sep29 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fb 19 b5 d0 db 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[  +6.806604] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[ +13.433681] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	[  +8.966707] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 f7 73 94 db cd 08 06
	[  +0.000344] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[Sep29 08:07] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 ad d0 02 25 47 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	
	
	==> etcd [3d5bc8bd7f0ffa9831231e2ccd173ca20be89d6dcc1ee1ad3b14f8dd9571bb86] <==
	{"level":"warn","ts":"2025-09-29T08:30:03.018242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.030088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.033604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.039960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.046371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.100824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.793114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.799945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.542994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.549599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.569139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.575527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:32:28.071330Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.763336ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071530Z","caller":"traceutil/trace.go:172","msg":"trace[30119979] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1117; }","duration":"161.980989ms","start":"2025-09-29T08:32:27.909530Z","end":"2025-09-29T08:32:28.071511Z","steps":["trace[30119979] 'range keys from in-memory index tree'  (duration: 161.701686ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T08:32:28.071329Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.131454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071650Z","caller":"traceutil/trace.go:172","msg":"trace[1183857226] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"120.458435ms","start":"2025-09-29T08:32:27.951174Z","end":"2025-09-29T08:32:28.071633Z","steps":["trace[1183857226] 'range keys from in-memory index tree'  (duration: 120.052644ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T08:33:12.239457Z","caller":"traceutil/trace.go:172","msg":"trace[155675200] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"129.084223ms","start":"2025-09-29T08:33:12.110348Z","end":"2025-09-29T08:33:12.239432Z","steps":["trace[155675200] 'process raft request'  (duration: 69.579624ms)","trace[155675200] 'compare'  (duration: 59.405727ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T08:33:12.474373Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.785446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:33:12.474452Z","caller":"traceutil/trace.go:172","msg":"trace[1612262900] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1258; }","duration":"129.87677ms","start":"2025-09-29T08:33:12.344560Z","end":"2025-09-29T08:33:12.474437Z","steps":["trace[1612262900] 'range keys from in-memory index tree'  (duration: 129.713966ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T08:40:02.621144Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1444}
	{"level":"info","ts":"2025-09-29T08:40:02.644347Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1444,"took":"22.608235ms","hash":1501025519,"current-db-size-bytes":6053888,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":3846144,"current-db-size-in-use":"3.8 MB"}
	{"level":"info","ts":"2025-09-29T08:40:02.644399Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1501025519,"revision":1444,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T08:45:02.627072Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2262}
	{"level":"info","ts":"2025-09-29T08:45:02.647516Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2262,"took":"19.728341ms","hash":4222069836,"current-db-size-bytes":6053888,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":3473408,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2025-09-29T08:45:02.647570Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4222069836,"revision":2262,"compact-revision":1444}
	
	
	==> kernel <==
	 08:46:40 up  2:29,  0 users,  load average: 0.13, 0.22, 0.53
	Linux addons-051783 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e6e25b7f19aec7f99b8219bbbaa88084f2510369dbfa360e267a083261d1c336] <==
	I0929 08:44:32.475608       1 main.go:301] handling current node
	I0929 08:44:42.475940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:42.475983       1 main.go:301] handling current node
	I0929 08:44:52.478944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:44:52.478972       1 main.go:301] handling current node
	I0929 08:45:02.475997       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:45:02.476035       1 main.go:301] handling current node
	I0929 08:45:12.475616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:45:12.475651       1 main.go:301] handling current node
	I0929 08:45:22.475943       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:45:22.475980       1 main.go:301] handling current node
	I0929 08:45:32.476057       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:45:32.476091       1 main.go:301] handling current node
	I0929 08:45:42.476933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:45:42.476987       1 main.go:301] handling current node
	I0929 08:45:52.476023       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:45:52.476065       1 main.go:301] handling current node
	I0929 08:46:02.483937       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:46:02.483977       1 main.go:301] handling current node
	I0929 08:46:12.475967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:46:12.476003       1 main.go:301] handling current node
	I0929 08:46:22.478905       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:46:22.478936       1 main.go:301] handling current node
	I0929 08:46:32.475491       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:46:32.475536       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e4ff50d0ab7df575a409e71f6c86b1e3bd4b8f41db0427eb9d65cbbef08b9a3] <==
	 > logger="UnhandledError"
	E0929 08:30:59.130912       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.135946       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.157237       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	I0929 08:30:59.225977       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 08:36:44.813354       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47410: use of closed network connection
	E0929 08:36:44.997114       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47438: use of closed network connection
	I0929 08:36:54.051263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.58.104"}
	I0929 08:37:00.154224       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 08:37:00.239132       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 08:37:00.408198       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.245.4"}
	I0929 08:40:03.495564       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 08:44:31.320478       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 08:44:31.320533       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 08:44:31.334332       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 08:44:31.334473       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 08:44:31.335600       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 08:44:31.335645       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 08:44:31.348945       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 08:44:31.349079       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 08:44:31.357633       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 08:44:31.357677       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 08:44:32.336441       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 08:44:32.358621       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 08:44:32.370970       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [6d75e80cafef289bcb0634728686530f7d177ec79248071405ed0223eda388c2] <==
	I0929 08:44:40.787113       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 08:44:40.787159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 08:44:40.877583       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:40.878669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:48.471960       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:48.472999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:49.788460       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:49.789482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:44:52.217443       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:44:52.218703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:45:07.864492       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:45:07.865426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:45:11.506131       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:45:11.507054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:45:11.534080       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:45:11.535197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0929 08:45:15.113933       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	E0929 08:45:42.357374       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:45:42.358483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:45:54.040755       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:45:54.041651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:45:56.773199       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:45:56.774407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 08:46:38.168708       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 08:46:38.169749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [a04df67a3379aa412e270c65b38675702f42ba0dc9e5c07b8052fb9a090d6471] <==
	I0929 08:30:12.128941       1 server_linux.go:53] "Using iptables proxy"
	I0929 08:30:12.417641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:30:12.520178       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:30:12.520269       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:30:12.522477       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:30:12.570590       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:30:12.570755       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:30:12.583981       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:30:12.584563       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:30:12.584628       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:30:12.586703       1 config.go:200] "Starting service config controller"
	I0929 08:30:12.586768       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:30:12.586873       1 config.go:309] "Starting node config controller"
	I0929 08:30:12.586913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:30:12.586938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 08:30:12.587504       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:30:12.587567       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:30:12.587568       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:30:12.587628       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:30:12.687916       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:30:12.688043       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:30:12.688062       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [33ea9996cc1d356857ab17f8e8157021f2b58227ecdb78065f0395986fc73f7b] <==
	E0929 08:30:03.522570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:03.522679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:03.522790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:03.522954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 08:30:03.522963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:03.522973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:03.523052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 08:30:03.523168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 08:30:03.523181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:03.523198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:03.523218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 08:30:03.523269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:30:03.523304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 08:30:03.523373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:03.523781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:30:04.391474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:04.430593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:04.474872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:04.497934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:04.640977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:04.655178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:04.765484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:04.784825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:04.965095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 08:30:06.819658       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 08:45:49 addons-051783 kubelet[1568]: E0929 08:45:49.829962    1568 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 08:45:49 addons-051783 kubelet[1568]: E0929 08:45:49.830162    1568 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(c75569f9-aafe-41b4-9ffa-4e10d9573809): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:45:49 addons-051783 kubelet[1568]: E0929 08:45:49.830215    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c75569f9-aafe-41b4-9ffa-4e10d9573809"
	Sep 29 08:45:51 addons-051783 kubelet[1568]: I0929 08:45:51.061922    1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjkkf\" (UniqueName: \"kubernetes.io/projected/b1b4dc23-d0fc-4122-8726-51da6b925412-kube-api-access-cjkkf\") pod \"helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3\" (UID: \"b1b4dc23-d0fc-4122-8726-51da6b925412\") " pod="local-path-storage/helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3"
	Sep 29 08:45:51 addons-051783 kubelet[1568]: I0929 08:45:51.062005    1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b1b4dc23-d0fc-4122-8726-51da6b925412-script\") pod \"helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3\" (UID: \"b1b4dc23-d0fc-4122-8726-51da6b925412\") " pod="local-path-storage/helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3"
	Sep 29 08:45:51 addons-051783 kubelet[1568]: I0929 08:45:51.062085    1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b1b4dc23-d0fc-4122-8726-51da6b925412-data\") pod \"helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3\" (UID: \"b1b4dc23-d0fc-4122-8726-51da6b925412\") " pod="local-path-storage/helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3"
	Sep 29 08:45:56 addons-051783 kubelet[1568]: E0929 08:45:56.157370    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135556157113470  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:45:56 addons-051783 kubelet[1568]: E0929 08:45:56.157404    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135556157113470  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:46:03 addons-051783 kubelet[1568]: E0929 08:46:03.958438    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c75569f9-aafe-41b4-9ffa-4e10d9573809"
	Sep 29 08:46:06 addons-051783 kubelet[1568]: E0929 08:46:06.159473    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135566159202673  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:46:06 addons-051783 kubelet[1568]: E0929 08:46:06.159505    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135566159202673  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:46:09 addons-051783 kubelet[1568]: I0929 08:46:09.957856    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:46:16 addons-051783 kubelet[1568]: E0929 08:46:16.161663    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135576161430606  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:46:16 addons-051783 kubelet[1568]: E0929 08:46:16.161691    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135576161430606  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:46:17 addons-051783 kubelet[1568]: E0929 08:46:17.958975    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c75569f9-aafe-41b4-9ffa-4e10d9573809"
	Sep 29 08:46:20 addons-051783 kubelet[1568]: E0929 08:46:20.513770    1568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 08:46:20 addons-051783 kubelet[1568]: E0929 08:46:20.513846    1568 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 08:46:20 addons-051783 kubelet[1568]: E0929 08:46:20.514100    1568 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(b3f305e2-2997-431f-b6d3-7d97f0b357aa): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:46:20 addons-051783 kubelet[1568]: E0929 08:46:20.514165    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b3f305e2-2997-431f-b6d3-7d97f0b357aa"
	Sep 29 08:46:26 addons-051783 kubelet[1568]: E0929 08:46:26.164750    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135586164520902  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:46:26 addons-051783 kubelet[1568]: E0929 08:46:26.164790    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135586164520902  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:46:30 addons-051783 kubelet[1568]: E0929 08:46:30.957988    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c75569f9-aafe-41b4-9ffa-4e10d9573809"
	Sep 29 08:46:33 addons-051783 kubelet[1568]: E0929 08:46:33.959431    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b3f305e2-2997-431f-b6d3-7d97f0b357aa"
	Sep 29 08:46:36 addons-051783 kubelet[1568]: E0929 08:46:36.167301    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135596167054753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:46:36 addons-051783 kubelet[1568]: E0929 08:46:36.167343    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135596167054753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	
	
	==> storage-provisioner [48e51a6b3842e2e63335e82d65f22a4db94233392a881d6d3ff86158809cd5ed] <==
	W0929 08:46:15.181863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:17.185440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:17.189756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:19.193210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:19.197441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:21.200748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:21.205773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:23.208933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:23.212726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:25.215823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:25.220089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:27.223555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:27.229407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:29.232595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:29.236506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:31.239285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:31.243052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:33.246347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:33.250408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:35.253684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:35.259191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:37.263058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:37.267176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:39.271644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:46:39.277500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051783 -n addons-051783
helpers_test.go:269: (dbg) Run:  kubectl --context addons-051783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3: exit status 1 (80.426312ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:37:00 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrnn8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wrnn8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  9m41s                default-scheduler  Successfully assigned default/nginx to addons-051783
	  Warning  Failed     8m26s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    90s (x5 over 9m41s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     21s (x5 over 8m26s)  kubelet            Error: ErrImagePull
	  Warning  Failed     21s (x4 over 7m23s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    8s (x11 over 8m26s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     8s (x11 over 8m26s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:38:27 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z2l94 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-z2l94:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  8m14s                default-scheduler  Successfully assigned default/task-pv-pod to addons-051783
	  Normal   Pulling    92s (x4 over 8m13s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     52s (x4 over 6m52s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     52s (x4 over 6m52s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x8 over 6m52s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     11s (x8 over 6m52s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zdgkp (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-zdgkp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.708868942s)
--- FAIL: TestAddons/parallel/LocalPath (345.36s)
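Every failure recorded in this block traces to the cause shown in the kubelet events above: unauthenticated image pulls from docker.io are rejected with "toomanyrequests: You have reached your unauthenticated pull rate limit". A minimal mitigation sketch, assuming the CI host has Docker Hub credentials available; the secret name "regcred" and the <user>/<token> placeholders are illustrative and not part of this report:

	# Pre-load the images the failing pods need so no pull happens inside the cluster
	out/minikube-linux-amd64 -p addons-051783 image load docker.io/nginx:alpine
	out/minikube-linux-amd64 -p addons-051783 image load docker.io/nginx:latest

	# Or authenticate pulls: create a docker-registry secret and attach it to the default service account
	kubectl --context addons-051783 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context addons-051783 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'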

                                                
                                    
TestAddons/parallel/Yakd (247.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2vsqw" [64489d6d-e5af-42b1-8efc-47e8285d526b] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
helpers_test.go:337: TestAddons/parallel/Yakd: WARNING: pod list for "yakd-dashboard" "app.kubernetes.io/name=yakd-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:1047: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051783 -n addons-051783
addons_test.go:1047: TestAddons/parallel/Yakd: showing logs for failed pods as of 2025-09-29 08:39:26.077264881 +0000 UTC m=+613.723890335
addons_test.go:1047: (dbg) Run:  kubectl --context addons-051783 describe po yakd-dashboard-5ff678cb9-2vsqw -n yakd-dashboard
addons_test.go:1047: (dbg) kubectl --context addons-051783 describe po yakd-dashboard-5ff678cb9-2vsqw -n yakd-dashboard:
Name:             yakd-dashboard-5ff678cb9-2vsqw
Namespace:        yakd-dashboard
Priority:         0
Service Account:  yakd-dashboard
Node:             addons-051783/192.168.49.2
Start Time:       Mon, 29 Sep 2025 08:30:52 +0000
Labels:           app.kubernetes.io/instance=yakd-dashboard
                  app.kubernetes.io/name=yakd-dashboard
                  gcp-auth-skip-secret=true
                  pod-template-hash=5ff678cb9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/yakd-dashboard-5ff678cb9
Containers:
  yakd:
    Container ID:   
    Image:          docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624
    Image ID:       
    Port:           8080/TCP (http)
    Host Port:      0/TCP (http)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  256Mi
    Requests:
      memory:   128Mi
    Liveness:   http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_NAMESPACE:  yakd-dashboard (v1:metadata.namespace)
      HOSTNAME:              yakd-dashboard-5ff678cb9-2vsqw (v1:metadata.name)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c69db (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-c69db:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  9m14s                default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         8m34s                default-scheduler  Successfully assigned yakd-dashboard/yakd-dashboard-5ff678cb9-2vsqw to addons-051783
  Warning  Failed            3m22s                kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": loading manifest for target platform: reading manifest sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed            102s (x3 over 7m8s)  kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed            102s (x4 over 7m8s)  kubelet            Error: ErrImagePull
  Normal   BackOff           34s (x10 over 7m8s)  kubelet            Back-off pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
  Warning  Failed            34s (x10 over 7m8s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling           19s (x5 over 8m33s)  kubelet            Pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
addons_test.go:1047: (dbg) Run:  kubectl --context addons-051783 logs yakd-dashboard-5ff678cb9-2vsqw -n yakd-dashboard
addons_test.go:1047: (dbg) Non-zero exit: kubectl --context addons-051783 logs yakd-dashboard-5ff678cb9-2vsqw -n yakd-dashboard: exit status 1 (71.505551ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "yakd" in pod "yakd-dashboard-5ff678cb9-2vsqw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:1047: kubectl --context addons-051783 logs yakd-dashboard-5ff678cb9-2vsqw -n yakd-dashboard: exit status 1
addons_test.go:1048: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Yakd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Yakd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-051783
helpers_test.go:243: (dbg) docker inspect addons-051783:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	        "Created": "2025-09-29T08:29:49.784096917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 388185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T08:29:49.817498779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hosts",
	        "LogPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24-json.log",
	        "Name": "/addons-051783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-051783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-051783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	                "LowerDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-051783",
	                "Source": "/var/lib/docker/volumes/addons-051783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-051783",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-051783",
	                "name.minikube.sigs.k8s.io": "addons-051783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "047419f5f1ab31c122f731e4981df640cdefbc71a38b2a98a0269c254b8b5147",
	            "SandboxKey": "/var/run/docker/netns/047419f5f1ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-051783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:6e:72:c6:39:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0a6b532c24ef61399a92b99bcc9c2c11ccb6f875b789fadd5474d59e3dfaa8b",
	                    "EndpointID": "1838c1e0213d9bfb41a2e140fea05dd9b5a4866fea7930ce517a2c020e4c5b9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-051783",
	                        "d5025459b831"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-051783 -n addons-051783
helpers_test.go:252: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 logs -n 25: (1.330633315s)
helpers_test.go:260: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-575596                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-575596   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-749576 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-749576                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-575596                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-575596   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-749576                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-084266 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-084266                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-867285 --alsologtostderr --binary-mirror http://127.0.0.1:34813 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-867285                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ start   │ -p addons-051783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ enable headlamp -p addons-051783 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                           │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ ip      │ addons-051783 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	│ addons  │ addons-051783 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:29:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:29:26.048391  387539 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:29:26.048698  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048709  387539 out.go:374] Setting ErrFile to fd 2...
	I0929 08:29:26.048715  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048947  387539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:29:26.049570  387539 out.go:368] Setting JSON to false
	I0929 08:29:26.050522  387539 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7915,"bootTime":1759126651,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:29:26.050623  387539 start.go:140] virtualization: kvm guest
	I0929 08:29:26.052691  387539 out.go:179] * [addons-051783] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:29:26.053951  387539 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:29:26.053949  387539 notify.go:220] Checking for updates...
	I0929 08:29:26.056443  387539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:29:26.057666  387539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:29:26.058965  387539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:29:26.060266  387539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:29:26.061458  387539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:29:26.062925  387539 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:29:26.085693  387539 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:29:26.085842  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.138374  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.129030053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.138489  387539 docker.go:318] overlay module found
	I0929 08:29:26.140424  387539 out.go:179] * Using the docker driver based on user configuration
	I0929 08:29:26.141686  387539 start.go:304] selected driver: docker
	I0929 08:29:26.141705  387539 start.go:924] validating driver "docker" against <nil>
	I0929 08:29:26.141717  387539 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:29:26.142365  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.198070  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.188331621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.198307  387539 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 08:29:26.198590  387539 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:29:26.200386  387539 out.go:179] * Using Docker driver with root privileges
	I0929 08:29:26.201498  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:26.201578  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:26.201592  387539 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 08:29:26.201692  387539 start.go:348] cluster config:
	{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

                                                
                                                
	I0929 08:29:26.202985  387539 out.go:179] * Starting "addons-051783" primary control-plane node in "addons-051783" cluster
	I0929 08:29:26.204068  387539 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 08:29:26.205294  387539 out.go:179] * Pulling base image v0.0.48 ...
	I0929 08:29:26.206376  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:26.206412  387539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 08:29:26.206422  387539 cache.go:58] Caching tarball of preloaded images
	I0929 08:29:26.206482  387539 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 08:29:26.206520  387539 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 08:29:26.206532  387539 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 08:29:26.206899  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:26.206927  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json: {Name:mk2a286bc12b96a7a99203a2062747f0cef91a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:26.223250  387539 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 08:29:26.223398  387539 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 08:29:26.223419  387539 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 08:29:26.223423  387539 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 08:29:26.223433  387539 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 08:29:26.223443  387539 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 08:29:38.381567  387539 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 08:29:38.381612  387539 cache.go:232] Successfully downloaded all kic artifacts
	I0929 08:29:38.381692  387539 start.go:360] acquireMachinesLock for addons-051783: {Name:mk2e012788fca6778bd19d14926129f41648dfda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 08:29:38.381939  387539 start.go:364] duration metric: took 219.203µs to acquireMachinesLock for "addons-051783"
	I0929 08:29:38.381976  387539 start.go:93] Provisioning new machine with config: &{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:29:38.382063  387539 start.go:125] createHost starting for "" (driver="docker")
	I0929 08:29:38.383873  387539 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 08:29:38.384110  387539 start.go:159] libmachine.API.Create for "addons-051783" (driver="docker")
	I0929 08:29:38.384143  387539 client.go:168] LocalClient.Create starting
	I0929 08:29:38.384255  387539 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem
	I0929 08:29:38.717409  387539 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem
	I0929 08:29:39.058441  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 08:29:39.075697  387539 cli_runner.go:211] docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 08:29:39.075776  387539 network_create.go:284] running [docker network inspect addons-051783] to gather additional debugging logs...
	I0929 08:29:39.075797  387539 cli_runner.go:164] Run: docker network inspect addons-051783
	W0929 08:29:39.093367  387539 cli_runner.go:211] docker network inspect addons-051783 returned with exit code 1
	I0929 08:29:39.093407  387539 network_create.go:287] error running [docker network inspect addons-051783]: docker network inspect addons-051783: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-051783 not found
	I0929 08:29:39.093422  387539 network_create.go:289] output of [docker network inspect addons-051783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-051783 not found
	
	** /stderr **
	I0929 08:29:39.093524  387539 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:39.112614  387539 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c10860}
	I0929 08:29:39.112659  387539 network_create.go:124] attempt to create docker network addons-051783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 08:29:39.112709  387539 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-051783 addons-051783
	I0929 08:29:39.172396  387539 network_create.go:108] docker network addons-051783 192.168.49.0/24 created
	I0929 08:29:39.172433  387539 kic.go:121] calculated static IP "192.168.49.2" for the "addons-051783" container
	I0929 08:29:39.172502  387539 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 08:29:39.190245  387539 cli_runner.go:164] Run: docker volume create addons-051783 --label name.minikube.sigs.k8s.io=addons-051783 --label created_by.minikube.sigs.k8s.io=true
	I0929 08:29:39.209341  387539 oci.go:103] Successfully created a docker volume addons-051783
	I0929 08:29:39.209430  387539 cli_runner.go:164] Run: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 08:29:45.546598  387539 cli_runner.go:217] Completed: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.337124509s)
	I0929 08:29:45.546633  387539 oci.go:107] Successfully prepared a docker volume addons-051783
	I0929 08:29:45.546654  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:45.546683  387539 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 08:29:45.546737  387539 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 08:29:49.714226  387539 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.167437965s)
	I0929 08:29:49.714268  387539 kic.go:203] duration metric: took 4.167582619s to extract preloaded images to volume ...
	W0929 08:29:49.714368  387539 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 08:29:49.714404  387539 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 08:29:49.714455  387539 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 08:29:49.767111  387539 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-051783 --name addons-051783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-051783 --network addons-051783 --ip 192.168.49.2 --volume addons-051783:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 08:29:50.031579  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Running}}
	I0929 08:29:50.049810  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.068448  387539 cli_runner.go:164] Run: docker exec addons-051783 stat /var/lib/dpkg/alternatives/iptables
	I0929 08:29:50.119527  387539 oci.go:144] the created container "addons-051783" has a running status.
	I0929 08:29:50.119561  387539 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa...
	I0929 08:29:50.320586  387539 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 08:29:50.349341  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.370499  387539 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 08:29:50.370528  387539 kic_runner.go:114] Args: [docker exec --privileged addons-051783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 08:29:50.419544  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.438350  387539 machine.go:93] provisionDockerMachine start ...
	I0929 08:29:50.438444  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.459048  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.459374  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.459393  387539 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 08:29:50.596058  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.596100  387539 ubuntu.go:182] provisioning hostname "addons-051783"
	I0929 08:29:50.596175  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.615278  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.615589  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.615612  387539 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-051783 && echo "addons-051783" | sudo tee /etc/hostname
	I0929 08:29:50.766108  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.766195  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.785560  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.785774  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.785791  387539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-051783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-051783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-051783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 08:29:50.924619  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 08:29:50.924652  387539 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 08:29:50.924674  387539 ubuntu.go:190] setting up certificates
	I0929 08:29:50.924687  387539 provision.go:84] configureAuth start
	I0929 08:29:50.924737  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:50.943329  387539 provision.go:143] copyHostCerts
	I0929 08:29:50.943421  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 08:29:50.943556  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 08:29:50.943643  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 08:29:50.943713  387539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.addons-051783 san=[127.0.0.1 192.168.49.2 addons-051783 localhost minikube]
	I0929 08:29:51.148195  387539 provision.go:177] copyRemoteCerts
	I0929 08:29:51.148260  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 08:29:51.148304  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.166345  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.264074  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 08:29:51.290856  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 08:29:51.316758  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 08:29:51.341889  387539 provision.go:87] duration metric: took 417.187234ms to configureAuth
	I0929 08:29:51.341922  387539 ubuntu.go:206] setting minikube options for container-runtime
	I0929 08:29:51.342090  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:29:51.342194  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.359952  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:51.360170  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:51.360189  387539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 08:29:51.599614  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 08:29:51.599641  387539 machine.go:96] duration metric: took 1.161262882s to provisionDockerMachine
	I0929 08:29:51.599653  387539 client.go:171] duration metric: took 13.215501429s to LocalClient.Create
	I0929 08:29:51.599668  387539 start.go:167] duration metric: took 13.215557799s to libmachine.API.Create "addons-051783"
	I0929 08:29:51.599677  387539 start.go:293] postStartSetup for "addons-051783" (driver="docker")
	I0929 08:29:51.599688  387539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 08:29:51.599774  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 08:29:51.599856  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.618351  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.717587  387539 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 08:29:51.721317  387539 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 08:29:51.721352  387539 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 08:29:51.721363  387539 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 08:29:51.721372  387539 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 08:29:51.721390  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 08:29:51.721462  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 08:29:51.721495  387539 start.go:296] duration metric: took 121.8109ms for postStartSetup
	I0929 08:29:51.721801  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.739650  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:51.740046  387539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 08:29:51.740104  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.758050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.851192  387539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 08:29:51.855723  387539 start.go:128] duration metric: took 13.4736408s to createHost
	I0929 08:29:51.855753  387539 start.go:83] releasing machines lock for "addons-051783", held for 13.47379323s
	I0929 08:29:51.855844  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.873999  387539 ssh_runner.go:195] Run: cat /version.json
	I0929 08:29:51.874046  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.874101  387539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 08:29:51.874186  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.892677  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.892826  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.984022  387539 ssh_runner.go:195] Run: systemctl --version
	I0929 08:29:52.057018  387539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 08:29:52.197504  387539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 08:29:52.202664  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.226004  387539 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 08:29:52.226089  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.256267  387539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 08:29:52.256294  387539 start.go:495] detecting cgroup driver to use...
	I0929 08:29:52.256336  387539 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 08:29:52.256387  387539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 08:29:52.272062  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 08:29:52.284075  387539 docker.go:218] disabling cri-docker service (if available) ...
	I0929 08:29:52.284139  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 08:29:52.297608  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 08:29:52.311496  387539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 08:29:52.379434  387539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 08:29:52.452878  387539 docker.go:234] disabling docker service ...
	I0929 08:29:52.452951  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 08:29:52.471190  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 08:29:52.482728  387539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 08:29:52.553081  387539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 08:29:52.660824  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 08:29:52.672658  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 08:29:52.689950  387539 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21650-382648/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I0929 08:29:53.606681  387539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 08:29:53.606744  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.620746  387539 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 08:29:53.620827  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.632032  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.642692  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.653396  387539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 08:29:53.663250  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.673800  387539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.690677  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.701296  387539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 08:29:53.710748  387539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 08:29:53.720068  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:53.822567  387539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 08:29:54.052148  387539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 08:29:54.052242  387539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 08:29:54.056279  387539 start.go:563] Will wait 60s for crictl version
	I0929 08:29:54.056335  387539 ssh_runner.go:195] Run: which crictl
	I0929 08:29:54.059686  387539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 08:29:54.093633  387539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 08:29:54.093726  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.130572  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.167704  387539 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	I0929 08:29:54.169060  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:54.186559  387539 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 08:29:54.190730  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 08:29:54.202692  387539 kubeadm.go:875] updating cluster {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 08:29:54.202909  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.337502  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.468366  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.649435  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:54.649610  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.777589  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.915339  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:55.048055  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.117941  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.117965  387539 crio.go:433] Images already preloaded, skipping extraction
	I0929 08:29:55.118025  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.154367  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.154391  387539 cache_images.go:85] Images are preloaded, skipping loading
	I0929 08:29:55.154401  387539 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I0929 08:29:55.154505  387539 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-051783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 08:29:55.154591  387539 ssh_runner.go:195] Run: crio config
	I0929 08:29:55.197157  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:55.197179  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:55.197193  387539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 08:29:55.197222  387539 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-051783 NodeName:addons-051783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 08:29:55.197413  387539 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-051783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 08:29:55.197493  387539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 08:29:55.207525  387539 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 08:29:55.207613  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 08:29:55.217221  387539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 08:29:55.235810  387539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 08:29:55.258594  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0929 08:29:55.277991  387539 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 08:29:55.281790  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 08:29:55.293204  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:55.360353  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:29:55.382375  387539 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783 for IP: 192.168.49.2
	I0929 08:29:55.382400  387539 certs.go:194] generating shared ca certs ...
	I0929 08:29:55.382416  387539 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.382548  387539 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 08:29:55.651560  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt ...
	I0929 08:29:55.651593  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt: {Name:mk53fbf30de594b3575593db0eac7c74aa2a569b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651775  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key ...
	I0929 08:29:55.651787  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key: {Name:mk35c377f1d90bf347db7dc4624ea5b41f2dcae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651874  387539 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 08:29:56.010531  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt ...
	I0929 08:29:56.010572  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt: {Name:mkabe28787fe5521225369fcdd8a8684c242d367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010810  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key ...
	I0929 08:29:56.010828  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key: {Name:mk151240dae8e83bb981e456caae01db62eb2077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010954  387539 certs.go:256] generating profile certs ...
	I0929 08:29:56.011050  387539 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key
	I0929 08:29:56.011071  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt with IP's: []
	I0929 08:29:56.156766  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt ...
	I0929 08:29:56.156798  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: {Name:mk9b8f8dd7c08d896eb2f2a24df27c4df7b8a87a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157020  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key ...
	I0929 08:29:56.157045  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key: {Name:mk413d2883ee03859619bae9a6ad426c2dac294b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157158  387539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d
	I0929 08:29:56.157188  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 08:29:56.672467  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d ...
	I0929 08:29:56.672506  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d: {Name:mka498a3f60495ba4009bb038cca767d64e6d878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672723  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d ...
	I0929 08:29:56.672747  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d: {Name:mkd42036f907b80afa6962c66b97c00a14ed475b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672879  387539 certs.go:381] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt
	I0929 08:29:56.672993  387539 certs.go:385] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key
	I0929 08:29:56.673074  387539 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key
	I0929 08:29:56.673103  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt with IP's: []
	I0929 08:29:57.054367  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt ...
	I0929 08:29:57.054403  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt: {Name:mk108739363f385844a88df9ec106753ae771d0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054593  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key ...
	I0929 08:29:57.054605  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key: {Name:mk26b223288f2fd31a6e78b544277cdc3d5192ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054865  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 08:29:57.054909  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 08:29:57.054936  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 08:29:57.054959  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 08:29:57.055530  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 08:29:57.081419  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 08:29:57.107158  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 08:29:57.132325  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 08:29:57.157699  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 08:29:57.182851  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 08:29:57.207862  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 08:29:57.233471  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 08:29:57.258657  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 08:29:57.286501  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 08:29:57.305136  387539 ssh_runner.go:195] Run: openssl version
	I0929 08:29:57.310898  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 08:29:57.323725  387539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327458  387539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327527  387539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.334303  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 08:29:57.344385  387539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 08:29:57.347990  387539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 08:29:57.348046  387539 kubeadm.go:392] StartCluster: {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:29:57.348116  387539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 08:29:57.348159  387539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 08:29:57.385638  387539 cri.go:89] found id: ""
	I0929 08:29:57.385716  387539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 08:29:57.395454  387539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 08:29:57.405038  387539 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 08:29:57.405100  387539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 08:29:57.414685  387539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 08:29:57.414705  387539 kubeadm.go:157] found existing configuration files:
	
	I0929 08:29:57.414765  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 08:29:57.424091  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 08:29:57.424158  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 08:29:57.433341  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 08:29:57.442616  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 08:29:57.442679  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 08:29:57.451665  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.460943  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 08:29:57.461008  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.470122  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 08:29:57.479257  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 08:29:57.479340  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 08:29:57.488496  387539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 08:29:57.543664  387539 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 08:29:57.607707  387539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 08:30:06.732943  387539 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I0929 08:30:06.732999  387539 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 08:30:06.733103  387539 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 08:30:06.733192  387539 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 08:30:06.733241  387539 kubeadm.go:310] OS: Linux
	I0929 08:30:06.733332  387539 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 08:30:06.733405  387539 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 08:30:06.733457  387539 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 08:30:06.733497  387539 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 08:30:06.733545  387539 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 08:30:06.733624  387539 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 08:30:06.733688  387539 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 08:30:06.733751  387539 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 08:30:06.733912  387539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 08:30:06.734049  387539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 08:30:06.734125  387539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 08:30:06.734176  387539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 08:30:06.736008  387539 out.go:252]   - Generating certificates and keys ...
	I0929 08:30:06.736074  387539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 08:30:06.736130  387539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 08:30:06.736184  387539 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 08:30:06.736237  387539 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 08:30:06.736289  387539 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 08:30:06.736356  387539 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 08:30:06.736446  387539 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 08:30:06.736584  387539 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736671  387539 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 08:30:06.736803  387539 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736949  387539 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 08:30:06.737047  387539 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 08:30:06.737115  387539 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 08:30:06.737192  387539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 08:30:06.737274  387539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 08:30:06.737358  387539 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 08:30:06.737431  387539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 08:30:06.737517  387539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 08:30:06.737617  387539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 08:30:06.737730  387539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 08:30:06.737805  387539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 08:30:06.739945  387539 out.go:252]   - Booting up control plane ...
	I0929 08:30:06.740037  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 08:30:06.740106  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 08:30:06.740177  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 08:30:06.740270  387539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 08:30:06.740362  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 08:30:06.740460  387539 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 08:30:06.740572  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 08:30:06.740634  387539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 08:30:06.740771  387539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 08:30:06.740901  387539 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 08:30:06.740969  387539 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.961891ms
	I0929 08:30:06.741050  387539 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 08:30:06.741148  387539 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 08:30:06.741256  387539 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 08:30:06.741361  387539 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 08:30:06.741468  387539 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.198584202s
	I0929 08:30:06.741557  387539 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.20667671s
	I0929 08:30:06.741647  387539 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002286434s
	I0929 08:30:06.741774  387539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 08:30:06.741941  387539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 08:30:06.741998  387539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 08:30:06.742173  387539 kubeadm.go:310] [mark-control-plane] Marking the node addons-051783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 08:30:06.742236  387539 kubeadm.go:310] [bootstrap-token] Using token: sez7z1.jh96okhowb57z8tt
	I0929 08:30:06.743877  387539 out.go:252]   - Configuring RBAC rules ...
	I0929 08:30:06.743987  387539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 08:30:06.744079  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 08:30:06.744207  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 08:30:06.744316  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 08:30:06.744423  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 08:30:06.744505  387539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 08:30:06.744607  387539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 08:30:06.744646  387539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 08:30:06.744689  387539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 08:30:06.744695  387539 kubeadm.go:310] 
	I0929 08:30:06.744746  387539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 08:30:06.744752  387539 kubeadm.go:310] 
	I0929 08:30:06.744820  387539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 08:30:06.744826  387539 kubeadm.go:310] 
	I0929 08:30:06.744869  387539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 08:30:06.744924  387539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 08:30:06.744972  387539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 08:30:06.744978  387539 kubeadm.go:310] 
	I0929 08:30:06.745052  387539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 08:30:06.745066  387539 kubeadm.go:310] 
	I0929 08:30:06.745135  387539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 08:30:06.745149  387539 kubeadm.go:310] 
	I0929 08:30:06.745232  387539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 08:30:06.745306  387539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 08:30:06.745369  387539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 08:30:06.745377  387539 kubeadm.go:310] 
	I0929 08:30:06.745445  387539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 08:30:06.745514  387539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 08:30:06.745520  387539 kubeadm.go:310] 
	I0929 08:30:06.745584  387539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745665  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 \
	I0929 08:30:06.745690  387539 kubeadm.go:310] 	--control-plane 
	I0929 08:30:06.745699  387539 kubeadm.go:310] 
	I0929 08:30:06.745764  387539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 08:30:06.745774  387539 kubeadm.go:310] 
	I0929 08:30:06.745853  387539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745968  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 
	I0929 08:30:06.745984  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:30:06.745992  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:30:06.748010  387539 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 08:30:06.749332  387539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 08:30:06.753814  387539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I0929 08:30:06.753848  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 08:30:06.772879  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 08:30:06.985959  387539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 08:30:06.986041  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:06.986104  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-051783 minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78 minikube.k8s.io/name=addons-051783 minikube.k8s.io/primary=true
	I0929 08:30:06.996442  387539 ops.go:34] apiserver oom_adj: -16
	I0929 08:30:07.062951  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:07.563693  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.063933  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.563857  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.063020  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.563145  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.063764  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.564058  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.063584  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.131479  387539 kubeadm.go:1105] duration metric: took 4.145485124s to wait for elevateKubeSystemPrivileges
	I0929 08:30:11.131516  387539 kubeadm.go:394] duration metric: took 13.783475405s to StartCluster
	I0929 08:30:11.131536  387539 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.131680  387539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:30:11.132107  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.132380  387539 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:30:11.132425  387539 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 08:30:11.132561  387539 addons.go:69] Setting yakd=true in profile "addons-051783"
	I0929 08:30:11.132586  387539 addons.go:238] Setting addon yakd=true in "addons-051783"
	I0929 08:30:11.132592  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.132625  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132389  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 08:30:11.132650  387539 addons.go:69] Setting default-storageclass=true in profile "addons-051783"
	I0929 08:30:11.132650  387539 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-051783"
	I0929 08:30:11.132651  387539 addons.go:69] Setting registry-creds=true in profile "addons-051783"
	I0929 08:30:11.132672  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-051783"
	I0929 08:30:11.132675  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-051783"
	I0929 08:30:11.132684  387539 addons.go:238] Setting addon registry-creds=true in "addons-051783"
	I0929 08:30:11.132675  387539 addons.go:69] Setting storage-provisioner=true in profile "addons-051783"
	I0929 08:30:11.132723  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132729  387539 addons.go:69] Setting gcp-auth=true in profile "addons-051783"
	I0929 08:30:11.132737  387539 addons.go:69] Setting ingress=true in profile "addons-051783"
	I0929 08:30:11.132749  387539 addons.go:238] Setting addon ingress=true in "addons-051783"
	I0929 08:30:11.132751  387539 mustload.go:65] Loading cluster: addons-051783
	I0929 08:30:11.132786  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132903  387539 addons.go:69] Setting ingress-dns=true in profile "addons-051783"
	I0929 08:30:11.132921  387539 addons.go:238] Setting addon ingress-dns=true in "addons-051783"
	I0929 08:30:11.132932  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.133022  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133038  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133039  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133154  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133198  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133236  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133242  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133465  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.134910  387539 addons.go:69] Setting metrics-server=true in profile "addons-051783"
	I0929 08:30:11.134935  387539 addons.go:238] Setting addon metrics-server=true in "addons-051783"
	I0929 08:30:11.134966  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.135401  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133500  387539 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.136449  387539 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-051783"
	I0929 08:30:11.136484  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.136993  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.137446  387539 addons.go:69] Setting registry=true in profile "addons-051783"
	I0929 08:30:11.137472  387539 addons.go:238] Setting addon registry=true in "addons-051783"
	I0929 08:30:11.137504  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.137785  387539 out.go:179] * Verifying Kubernetes components...
	I0929 08:30:11.132620  387539 addons.go:69] Setting inspektor-gadget=true in profile "addons-051783"
	I0929 08:30:11.137998  387539 addons.go:238] Setting addon inspektor-gadget=true in "addons-051783"
	I0929 08:30:11.138030  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.138040  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.138478  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.132724  387539 addons.go:238] Setting addon storage-provisioner=true in "addons-051783"
	I0929 08:30:11.138872  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133573  387539 addons.go:69] Setting volcano=true in profile "addons-051783"
	I0929 08:30:11.133608  387539 addons.go:69] Setting volumesnapshots=true in profile "addons-051783"
	I0929 08:30:11.133632  387539 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.133523  387539 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-051783"
	I0929 08:30:11.133512  387539 addons.go:69] Setting cloud-spanner=true in profile "addons-051783"
	I0929 08:30:11.139071  387539 addons.go:238] Setting addon cloud-spanner=true in "addons-051783"
	I0929 08:30:11.139164  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139273  387539 addons.go:238] Setting addon volumesnapshots=true in "addons-051783"
	I0929 08:30:11.139284  387539 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-051783"
	I0929 08:30:11.139311  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139319  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140056  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:30:11.140193  387539 addons.go:238] Setting addon volcano=true in "addons-051783"
	I0929 08:30:11.140204  387539 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:11.140225  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140228  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.146698  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147224  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147394  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149077  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149662  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.151164  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.176264  387539 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 08:30:11.181229  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 08:30:11.181264  387539 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 08:30:11.181355  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.198928  387539 addons.go:238] Setting addon default-storageclass=true in "addons-051783"
	I0929 08:30:11.198980  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.200501  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.202621  387539 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 08:30:11.202751  387539 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 08:30:11.204060  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 08:30:11.204203  387539 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 08:30:11.204287  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.204590  387539 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 08:30:11.206350  387539 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 08:30:11.206413  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 08:30:11.206494  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	W0929 08:30:11.215084  387539 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 08:30:11.220539  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.228994  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 08:30:11.229058  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.230311  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 08:30:11.230348  387539 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 08:30:11.230415  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.230456  387539 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 08:30:11.232483  387539 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-051783"
	I0929 08:30:11.232653  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.234514  387539 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.234537  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 08:30:11.234593  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.236276  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.238980  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 08:30:11.240948  387539 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 08:30:11.242224  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.242345  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.242360  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 08:30:11.242423  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.249763  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.249815  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 08:30:11.249988  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.251632  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 08:30:11.252713  387539 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 08:30:11.256731  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 08:30:11.256909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.256925  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 08:30:11.257007  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.259232  387539 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 08:30:11.259246  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 08:30:11.261351  387539 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.261383  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 08:30:11.261446  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.261602  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 08:30:11.261990  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.264208  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 08:30:11.265661  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 08:30:11.266953  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 08:30:11.268988  387539 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 08:30:11.269090  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 08:30:11.270103  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.270359  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 08:30:11.270376  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 08:30:11.270435  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.270601  387539 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 08:30:11.270610  387539 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 08:30:11.270648  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.275993  387539 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 08:30:11.282092  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.282115  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 08:30:11.282181  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.285473  387539 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 08:30:11.290090  387539 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.291158  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 08:30:11.295912  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 08:30:11.295961  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.299675  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.313891  387539 out.go:179]   - Using image docker.io/busybox:stable
	I0929 08:30:11.315473  387539 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 08:30:11.316814  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.316848  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 08:30:11.316910  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.317050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.323553  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.332930  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.335659  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.338799  387539 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:11.338893  387539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 08:30:11.338992  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.348819  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.349921  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:30:11.354726  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.358638  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.365096  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.375197  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.379217  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	W0929 08:30:11.383998  387539 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 08:30:11.384044  387539 retry.go:31] will retry after 372.305387ms: ssh: handshake failed: EOF
	I0929 08:30:11.384985  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.385740  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.455618  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 08:30:11.455652  387539 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 08:30:11.483956  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 08:30:11.483993  387539 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 08:30:11.501077  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 08:30:11.501104  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 08:30:11.512909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 08:30:11.512936  387539 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 08:30:11.513909  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 08:30:11.513933  387539 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 08:30:11.522184  387539 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.522210  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 08:30:11.532474  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.547827  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.549888  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 08:30:11.549921  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 08:30:11.551406  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 08:30:11.551429  387539 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 08:30:11.551604  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.551620  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 08:30:11.562054  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.567658  387539 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.567682  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 08:30:11.568342  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.575483  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.579024  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.580084  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.589345  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 08:30:11.589374  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 08:30:11.591142  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.596651  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.617511  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.639242  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 08:30:11.639268  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 08:30:11.640436  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.640457  387539 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 08:30:11.676132  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.683757  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 08:30:11.683933  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 08:30:11.694476  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.733321  387539 node_ready.go:35] waiting up to 6m0s for node "addons-051783" to be "Ready" ...
	I0929 08:30:11.737381  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 08:30:11.737409  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 08:30:11.739451  387539 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 08:30:11.742034  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 08:30:11.742058  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 08:30:11.860616  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 08:30:11.860647  387539 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 08:30:11.867313  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 08:30:11.867348  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 08:30:11.967456  387539 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:11.967489  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 08:30:11.972315  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 08:30:11.972363  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 08:30:12.022878  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:12.038007  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 08:30:12.038036  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 08:30:12.049218  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:12.116439  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 08:30:12.116470  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 08:30:12.218447  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 08:30:12.218482  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 08:30:12.270160  387539 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-051783" context rescaled to 1 replicas
	I0929 08:30:12.276753  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 08:30:12.276954  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 08:30:12.325380  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.325408  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 08:30:12.363377  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.640545  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.07217093s)
	W0929 08:30:12.640603  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:12.640631  387539 retry.go:31] will retry after 237.04452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:12.640719  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.065212731s)
	I0929 08:30:12.641043  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.061988054s)
	I0929 08:30:12.641104  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.060998244s)
	I0929 08:30:12.641174  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049961126s)
	I0929 08:30:12.837190  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.240492795s)
	I0929 08:30:12.837239  387539 addons.go:479] Verifying addon ingress=true in "addons-051783"
	I0929 08:30:12.837345  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.219781667s)
	I0929 08:30:12.837419  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161075095s)
	I0929 08:30:12.837447  387539 addons.go:479] Verifying addon registry=true in "addons-051783"
	I0929 08:30:12.837566  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.142937066s)
	I0929 08:30:12.837594  387539 addons.go:479] Verifying addon metrics-server=true in "addons-051783"
	I0929 08:30:12.839983  387539 out.go:179] * Verifying ingress addon...
	I0929 08:30:12.839983  387539 out.go:179] * Verifying registry addon...
	I0929 08:30:12.839983  387539 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-051783 service yakd-dashboard -n yakd-dashboard
	
	I0929 08:30:12.842161  387539 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 08:30:12.843164  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 08:30:12.846165  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:12.846189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:12.846718  387539 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 08:30:12.846741  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:12.878020  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.347067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.347316  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.444185  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.394912895s)
	W0929 08:30:13.444269  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 08:30:13.444303  387539 retry.go:31] will retry after 148.150087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 08:30:13.444442  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.080991087s)
	I0929 08:30:13.444483  387539 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:13.446118  387539 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 08:30:13.448654  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 08:30:13.452016  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:13.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:13.577429  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.577457  387539 retry.go:31] will retry after 254.552952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.593694  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0929 08:30:13.737433  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:13.832408  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.846313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.846455  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.952328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.346125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.346258  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.845799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.845811  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.951680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.346030  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.346221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.453724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.845371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.845746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.952128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.053703  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.459968372s)
	I0929 08:30:16.053810  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.22138062s)
	W0929 08:30:16.053859  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:16.053883  387539 retry.go:31] will retry after 481.367348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:16.235952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:16.346141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.346415  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.452678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.535851  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:16.846177  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.846299  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.951988  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:17.090051  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.090084  387539 retry.go:31] will retry after 480.173629ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.345653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.345864  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.453018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:17.571186  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:17.846646  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.846705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.952363  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:18.133672  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:18.133711  387539 retry.go:31] will retry after 1.605452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:18.236698  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:18.345996  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.346227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.831696  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 08:30:18.831773  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:18.846470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.846549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.851454  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:18.951695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.969096  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 08:30:18.989016  387539 addons.go:238] Setting addon gcp-auth=true in "addons-051783"
	I0929 08:30:18.989103  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:18.989486  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:19.008865  387539 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 08:30:19.008932  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:19.027173  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:19.120755  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:19.121923  387539 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 08:30:19.122900  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 08:30:19.122919  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 08:30:19.143102  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 08:30:19.143126  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 08:30:19.162866  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.162888  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 08:30:19.183136  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.346348  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.346554  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.453192  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:19.501972  387539 addons.go:479] Verifying addon gcp-auth=true in "addons-051783"
	I0929 08:30:19.503639  387539 out.go:179] * Verifying gcp-auth addon...
	I0929 08:30:19.505850  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 08:30:19.554509  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 08:30:19.554531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
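The gcp-auth verification above polls the cluster until the pod matching label kubernetes.io/minikube-addons=gcp-auth in the gcp-auth namespace leaves the Pending state. A rough manual equivalent, assuming a kubeconfig pointed at this cluster (the namespace and label selector are taken from the two log lines above):

	# Sketch, not from the test run: list the pod the gcp-auth verifier is waiting on.
	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth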
	I0929 08:30:19.740347  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:19.845786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.845969  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.951989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.008598  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.299545  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.299581  387539 retry.go:31] will retry after 1.544699875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.345964  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.452158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.736317  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:20.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.845861  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.951742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.009815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.346000  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.346032  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.451989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.508685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.845176  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:21.845841  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.846114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.952278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.009273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:22.345019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.346075  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 08:30:22.403582  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.403621  387539 retry.go:31] will retry after 3.049515308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.452614  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.512271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:22.736403  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:22.845553  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:22.951921  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.010165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.345659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.345820  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.451629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.509351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.846115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.846228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.952047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.008926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:24.346005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.452131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.509321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:24.737273  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:24.845357  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.845622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.951671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.010110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.346716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.346788  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.453468  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:25.510278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.845392  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.845982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.951775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:26.006239  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.006394  387539 retry.go:31] will retry after 2.506202781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.008893  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.346077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.346300  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.452870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:26.510002  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.845936  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.846437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.952599  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.010142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:27.237031  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:27.345974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.346037  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.451702  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.509719  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:27.845995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.846262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.952122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.008966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.345646  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.346068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.452500  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.509096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.513240  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:28.845526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.845724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.952636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.009980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.073172  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:29.073204  387539 retry.go:31] will retry after 5.087993961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:29.345624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.345890  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.451566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.509314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.736247  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:29.845167  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.845589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.952470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.009285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.345961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.346228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.451762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.509671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.845660  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.845938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.951757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.010434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:31.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.346159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.452024  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.508639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:31.736734  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:31.845802  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.846069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.008631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.345183  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.345554  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.452360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.509283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.846011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.846198  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.952029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.008505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.345468  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.346184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.845492  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.845973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.951615  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.009499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:34.161747  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 08:30:34.236880  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:34.346017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.346168  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.451966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:34.713989  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.714029  387539 retry.go:31] will retry after 10.074915141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.846205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.846262  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.952041  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.009299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.346101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.346147  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.452133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.508814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.845885  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.846022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.952026  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.008870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:36.345968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.346092  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.452038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.508708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:36.736573  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:36.845946  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.846138  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.951934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.010147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.345611  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.346391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.452092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.508537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.845236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.845710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.951391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.009185  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:38.345379  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.345497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.452268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.509054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:38.736952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:38.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.845942  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.951848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.009583  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.345482  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.345749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.452467  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.509234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.845877  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.845968  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.951690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:40.345848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.346009  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.451555  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:40.737059  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:40.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.845985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.951632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.009343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.345541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.346172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.452233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.509214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.846040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.846112  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.951896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.009603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.345289  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.451783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.509700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.845799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.845983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.951967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.008596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:43.236598  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:43.346000  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.346147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.452087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.509013  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:43.846134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.846259  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.952036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.008744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.345998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.346244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.452116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.508722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.789668  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:44.848890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.848956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.952825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:45.346063  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.346265  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:30:45.349824  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.349902  387539 retry.go:31] will retry after 10.254228561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.451609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.509499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:45.736311  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:45.845308  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:45.845508  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.952578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.345276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.345820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.451640  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.509515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.845665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.845801  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.951610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.009568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:47.346135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.347757  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.451685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.509687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:47.736659  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:47.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.846278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.010881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.345580  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.346116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.508539  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.845649  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.845738  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.951441  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.009204  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.345513  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.345678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.451528  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.845483  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.846049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.951870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.009622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:50.236705  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:50.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.346397  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.452090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.508959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:50.845410  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.846029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.008722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.345637  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.346169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.452115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.845715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.845962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.951758  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.009370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.345902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.451699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.509385  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.735450  387539 node_ready.go:49] node "addons-051783" is "Ready"
	I0929 08:30:52.735486  387539 node_ready.go:38] duration metric: took 41.00212415s for node "addons-051783" to be "Ready" ...
	I0929 08:30:52.735510  387539 api_server.go:52] waiting for apiserver process to appear ...
	I0929 08:30:52.735569  387539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 08:30:52.754269  387539 api_server.go:72] duration metric: took 41.621848619s to wait for apiserver process to appear ...
	I0929 08:30:52.754302  387539 api_server.go:88] waiting for apiserver healthz status ...
	I0929 08:30:52.754329  387539 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 08:30:52.758629  387539 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 08:30:52.759566  387539 api_server.go:141] control plane version: v1.34.1
	I0929 08:30:52.759591  387539 api_server.go:131] duration metric: took 5.283085ms to wait for apiserver health ...
	I0929 08:30:52.759601  387539 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 08:30:52.763531  387539 system_pods.go:59] 20 kube-system pods found
	I0929 08:30:52.763568  387539 system_pods.go:61] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.763584  387539 system_pods.go:61] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.763591  387539 system_pods.go:61] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.763598  387539 system_pods.go:61] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.763604  387539 system_pods.go:61] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.763610  387539 system_pods.go:61] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.763618  387539 system_pods.go:61] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.763625  387539 system_pods.go:61] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.763632  387539 system_pods.go:61] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.763646  387539 system_pods.go:61] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.763655  387539 system_pods.go:61] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.763661  387539 system_pods.go:61] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.763671  387539 system_pods.go:61] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.763677  387539 system_pods.go:61] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.763685  387539 system_pods.go:61] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.763695  387539 system_pods.go:61] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.763703  387539 system_pods.go:61] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.763711  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.763762  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.763769  387539 system_pods.go:61] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.763779  387539 system_pods.go:74] duration metric: took 4.172047ms to wait for pod list to return data ...
	I0929 08:30:52.763792  387539 default_sa.go:34] waiting for default service account to be created ...
	I0929 08:30:52.766094  387539 default_sa.go:45] found service account: "default"
	I0929 08:30:52.766121  387539 default_sa.go:55] duration metric: took 2.321933ms for default service account to be created ...
	I0929 08:30:52.766133  387539 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 08:30:52.770696  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:52.770757  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.770770  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.770776  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.770784  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.770789  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.770794  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.770802  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.770808  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.770815  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.770824  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.770843  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.770851  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.770863  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.770872  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.770881  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.770891  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.770899  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.770908  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.770928  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.770935  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.770959  387539 retry.go:31] will retry after 296.951592ms: missing components: kube-dns
	I0929 08:30:52.847272  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:52.847306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.847283  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.956403  387539 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:52.956428  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.058959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.074050  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.074084  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.074092  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.074102  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.074109  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.074114  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.074118  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.074124  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.074127  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.074131  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.074136  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.074139  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.074143  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.074148  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.074158  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.074162  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.074167  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.074171  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.074177  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074185  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074189  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.074204  387539 retry.go:31] will retry after 260.486294ms: missing components: kube-dns
	I0929 08:30:53.340885  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.340928  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.340939  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.340949  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.340957  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.340970  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.340976  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.340984  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.340989  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.340994  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.341002  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.341007  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.341013  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.341020  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.341029  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.341037  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.341045  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.341052  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.341071  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341079  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341086  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.341104  387539 retry.go:31] will retry after 402.781904ms: missing components: kube-dns
	I0929 08:30:53.345674  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.452965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.509656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.749539  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.749584  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.749596  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.749607  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.749615  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.749625  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.749637  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.749644  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.749652  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.749658  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.749673  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.749681  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.749688  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.749700  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.749713  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.749725  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.749741  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.749752  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.749760  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749772  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749780  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.749803  387539 retry.go:31] will retry after 372.296454ms: missing components: kube-dns
	I0929 08:30:53.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.846351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.953470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.009621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.127961  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:54.128007  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:54.128016  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Running
	I0929 08:30:54.128029  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:54.128037  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:54.128046  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:54.128055  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:54.128068  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:54.128073  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:54.128080  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:54.128094  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:54.128101  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:54.128111  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:54.128119  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:54.128131  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:54.128140  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:54.128150  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:54.128156  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:54.128167  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128182  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128190  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Running
	I0929 08:30:54.128201  387539 system_pods.go:126] duration metric: took 1.362060932s to wait for k8s-apps to be running ...
	I0929 08:30:54.128214  387539 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 08:30:54.128269  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 08:30:54.143506  387539 system_svc.go:56] duration metric: took 15.282529ms WaitForService to wait for kubelet
	I0929 08:30:54.143541  387539 kubeadm.go:578] duration metric: took 43.011126136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:30:54.143567  387539 node_conditions.go:102] verifying NodePressure condition ...
	I0929 08:30:54.146666  387539 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 08:30:54.146694  387539 node_conditions.go:123] node cpu capacity is 8
	I0929 08:30:54.146710  387539 node_conditions.go:105] duration metric: took 3.13874ms to run NodePressure ...
	I0929 08:30:54.146723  387539 start.go:241] waiting for startup goroutines ...
	I0929 08:30:54.346096  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.346452  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.452512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.845681  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.846213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.952945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.009776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.346034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.346210  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.452987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.604936  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:55.845661  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.846303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.952647  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.009596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:56.227075  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:56.227117  387539 retry.go:31] will retry after 11.111742245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
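The apply failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the data it parses has neither apiVersion nor kind set. As a rough illustration only (not part of this test run), a small Go program along the following lines could report which YAML documents in such a manifest are missing those two fields; it assumes gopkg.in/yaml.v3 is available and takes the manifest path as its only argument, and the file name check_manifest.go is hypothetical.

// check_manifest.go: hypothetical helper, not part of minikube or this test.
// It decodes each YAML document in the given manifest and prints which ones
// are missing the apiVersion or kind fields that kubectl validation requires.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: check_manifest <manifest.yaml>")
		os.Exit(2)
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents in the file
			}
			fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
			os.Exit(1)
		}
		if doc == nil {
			continue // empty document (e.g. a bare "---" separator)
		}
		for _, field := range []string{"apiVersion", "kind"} {
			if _, ok := doc[field]; !ok {
				fmt.Printf("document %d: %s not set\n", i, field)
			}
		}
	}
}

The output would name any document missing those headers; the --validate=false flag suggested in the error message merely bypasses this check rather than repairing the manifest.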
	I0929 08:30:56.346587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.346664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.452545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.509737  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:56.846282  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.846404  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.952291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.008904  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.346213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.452947  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.553095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.845310  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.845536  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.952617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.009229  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.345911  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.452036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.509465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.846116  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.846300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.954223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.009020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.345799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.451999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.846016  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.846048  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.951820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.009510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.346008  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.346043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.452095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.845635  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.846133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.952120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.008582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.346305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.346398  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.452779  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.509350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.845977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.846089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.951976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.009725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.346046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.346195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.452152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.508856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.845624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.845816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.951786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.009165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.345570  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.345806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.452275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.508934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.846184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.846321  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.009280  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.345995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.346111  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.452256  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.509372  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.845664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.846025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.952025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.009380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.346175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.346181  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.452623  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.845511  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.845789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.951736  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.345807  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.346120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.452299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.845431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.845747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.951811  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.009905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.339106  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:07.345597  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:07.346187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.452931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.509578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.846266  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:31:07.899059  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:31:07.899089  387539 retry.go:31] will retry after 40.559996542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:31:07.952238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.009242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.345806  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.345963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.452237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.508727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.846489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.846533  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.952772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.010175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.346214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.346399  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.452814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.509683  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.846071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.846175  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.009101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.345238  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.346055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.509087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.845466  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.845735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.951734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.009376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.346018  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.346093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.452602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.509357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.845819  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.846106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.952393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.009094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.345109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.345635  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.452900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.509747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.845711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.009115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.345408  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.452396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.509231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.845494  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.846119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.952602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.010164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.346040  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.452353  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.845489  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.845815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.009711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.346376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.452852  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.509706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.846977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.847062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.952541  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.009327  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.345888  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.346265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.452465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.845448  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.845961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.010066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.345301  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.345698  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.451859  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.552769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.845897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.846010  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.951895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.345789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.345935  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.451969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.509592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.845904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.846320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.952560  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.009221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.345672  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.509390  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.845688  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.845944  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.346107  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.451968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.845746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.846140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.952760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.345888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.345967  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.452022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.510304  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.845633  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.846006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.952314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.346112  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.346281  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.452380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.509171  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.845463  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.952321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.008794  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.345924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.346134  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.452014  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.510198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.845423  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.845908  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.952121  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.008788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.345818  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.345880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.452709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.846079  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.346408  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.346645  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.452594  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.509856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.846416  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.846446  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.952577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.009243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.346002  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.346328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.452568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.509226  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.845630  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.845989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.009102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.344984  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.345670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.451721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.509670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.846298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.846328  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.952436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.009088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.345071  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.345514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.452990  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.509800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.845538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.845549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.952752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.345731  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.345767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.451898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.845660  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.845743  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.954437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.009591  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.345694  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.345826  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.451850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.509114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.845457  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.845863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.008880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.452522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.845340  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.846098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.345562  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.345751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.451752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.509498  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.846005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.846015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.952296  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.008916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.346067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.346085  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.452074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.508388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.846407  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.952505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.009198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.345603  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.345997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.452284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.508994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.845333  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.845899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.346173  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.346187  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.452081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.508670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.848907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.848908  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.951592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.009305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.345881  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.346217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.452391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.509291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.846291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.952619  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.345641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.346183  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.452340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.845435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.845657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.951659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.009365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.345904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.345948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.508874  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.846399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.952667  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.009535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.346057  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.346313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.452593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.509172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.845821  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.951931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.009666  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.345746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.345756  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.451930  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.509717  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.845968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.846159  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.952302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.008813  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.345751  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.346083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.452220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.508800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.846373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.846428  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.952582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.009477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.345816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.346146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.509082  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.845206  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.845593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.952920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.009344  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.346032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.452584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.846130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.846227  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.952242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.009320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.345668  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.346165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.452320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.846497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.846568  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.009270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.346009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.346017  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.452179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.508810  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.846318  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.953200  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.053765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.345928  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.345949  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.451841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.509367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.845759  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.009049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.346089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.346296  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.509276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.845998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.846031  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.953092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.008958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.348118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.348220  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.452645  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.459706  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:48.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.845521  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.846369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.952245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:31:49.009817  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:31:49.009958  387539 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
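	(Editor's note on the failure above: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest it is given has no top-level apiVersion or kind field, which is why only the other inspektor-gadget objects apply cleanly. The following is a minimal, hypothetical Go sketch of that same pre-apply check; it is not taken from the minikube or kubectl sources, and the manifest contents and placeholder CRD name are illustrative assumptions only.)

	package main

	import (
		"fmt"
		"strings"
	)

	// hasTopLevelField reports whether a YAML document contains a top-level
	// "field:" line — a rough stand-in for the schema validation kubectl
	// performs before applying a manifest.
	func hasTopLevelField(doc, field string) bool {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimRight(line, " "), field+":") {
				return true
			}
		}
		return false
	}

	func main() {
		// Illustrative manifest; the name below is a placeholder, not the
		// actual inspektor-gadget CRD shipped in ig-crd.yaml.
		manifest := `apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.example.io`

		for _, field := range []string{"apiVersion", "kind"} {
			if !hasTopLevelField(manifest, field) {
				// This is the condition kubectl reports above as
				// "[apiVersion not set, kind not set]".
				fmt.Printf("invalid document: %s not set\n", field)
				return
			}
		}
		fmt.Println("manifest has apiVersion and kind set")
	}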
	I0929 08:31:49.346161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.346314  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.452693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.509721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:49.846323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.846403  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.952288  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.009479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.346165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.346262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.511027  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.846141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.846346  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.952309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.345651  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.346358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.452496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.845910  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.846102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.952292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.008948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.346231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.346476  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.452572  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.846165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.846219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.009004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.452012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.510161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.845342  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.845616  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.952894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.009820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.346066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.346111  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.451951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.509668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.845920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.845975  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.952307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.008953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.346482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.346564  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.452557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.509198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.846008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.846122  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.952273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.009005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.345943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.345987  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.451970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.509693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.846279  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.846364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.952734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.345985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.452169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.509107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.845868  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.845918  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.952230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.008806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.346324  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.346362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.452386  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.509302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.845621  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.952271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.009231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.345552  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.346005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.452425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.509368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.846005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.846038  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.009825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.346371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.346435  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.845617  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.845923  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.952434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.009268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.346190  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.452432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.845820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.952038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.009864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.345911  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.346056  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.452757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.845906  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.846292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.952670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.345785  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.346020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.452457  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.509461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.846203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.952857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.008766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.346191  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.346205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.509374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.845874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.846090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.952199  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.009031  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.345858  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.345930  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.451888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.846482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.846625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.952585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.009218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.345706  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.452653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.509286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.845541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.845704  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.951956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.009468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.345695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.345745  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.451863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.510159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.845888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.845901  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.951951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.345980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.346046  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.452589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.509271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.846034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.952511  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.008945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.346573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.346620  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.452981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.509795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.846438  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.009110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.345733  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.451902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.509713  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.846101  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.846139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.952420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.009168  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.346099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.346223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.510142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.845960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.951897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.010286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.345508  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.346153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.452434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.845813  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.846236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.952299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.345858  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.346006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.452117  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.508849  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.845790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.846007  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.951901  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.346064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.346065  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.452106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.508883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.846158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.846171  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.952374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.008914  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.346557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.346608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.509895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.846827  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.846861  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.952699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.009411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.345859  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.346429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.452726  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.846572  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.846610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.952453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.345250  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.345814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.508754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.846167  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.846211  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.346188  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.509267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.846236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.846235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.952637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.009342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.345703  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.346091  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.509449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.846316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.846344  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.952405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.345264  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.346400  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.452542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.509262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.845773  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.346194  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.509376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.848839  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.849867  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.952936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.010023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.346114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.452763  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.846197  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.846244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.952388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.009290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.345800  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.346246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.452672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.509534  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.846304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.846334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.952785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.009642  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.346072  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.346415  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.452739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.509705  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.846107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.846335  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.952786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.009641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.346282  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.346356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.452912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.509769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.846639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.846675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.953086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.009130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.452469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.510429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.845959  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.846628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.953298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.009036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.347053  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:27.347275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.452777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.846103  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.846145  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.072906  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.073113  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.508950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.845955  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.846095  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.009351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.347464  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.347629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.453517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.553437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.846126  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.846245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.008971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.345959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.346015  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.509418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.845766  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.846285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.008956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.345931  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.346361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.452474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.845897  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.846021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.952093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.345435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.345772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.452246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.509083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.845812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.845956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.346099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.346120  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:33.452146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.508729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.846479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.036243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.036382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.345600  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.452267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.508982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.845610  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.845774  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.953630  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.008888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.346785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.346853  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.451866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.509729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.846406  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.954174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.055655  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.845459  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.845939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.951953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.009866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.346021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.346064  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.452076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.509650  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.952853  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.345624  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.452271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.509005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.845239  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.845607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.953072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.009685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.346312  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.346343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.452629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.846305  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.952898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.009523  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.346058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.346222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.452218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.509154  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.845436  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.845959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.952223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.008967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.345362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.345715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.451987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.509593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.846208  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.952460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.009083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.345364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.345994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.452312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.845412  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.846137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.009246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.345531  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.451965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.509607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.845677  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.845725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.953242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.008881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.346140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.346245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.508976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.846058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.846073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.008952  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.452230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.508958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.846253  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.846260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.952496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.009248  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.345700  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.346422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.452785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.846041  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.951796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.009505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.345956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.345992  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.451971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.509761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.846334  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.952805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.009735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.345689  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.346306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.452750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.845880  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.846359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.952570  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.009297  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.345969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.346094  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.452240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.509049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.846006  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.952184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.008907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.345976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.346081  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.451788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.510100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.845304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.848309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.345805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.345874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.451634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.509582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.845944  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.846447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.953076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.008934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.345804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.345877  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.452096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.508656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.846195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.846222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.952603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.009374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.345675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.846118  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.952566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.009207  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.345383  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.452193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.508803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.846518  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.846608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.952787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.009360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.346141  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.346211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.452319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.846350  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.846419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.952451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.009066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.345454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.345940  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.452221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.508812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.846088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.846113  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.952011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.345986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.346090  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.846361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.846431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.952871  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.009495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.346447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.346500  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.452249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.508841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.845781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.845828  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.951889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.009775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.346440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.346485  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.452552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.008817  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.346461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.346526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.455517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.845761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.845875  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.952068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.009767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.346151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.346291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.452530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.553772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.845974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.846019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.010114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.345293  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.345801  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.451761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.845976  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.846143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.952766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.009431  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.345682  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.346257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.453746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.509942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.846258  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.846309  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.952266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.009753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.346015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.346114  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.452202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.846315  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.846361  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.952432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.009137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.345758  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:05.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.552401  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.846099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.846460  387539 kapi.go:107] duration metric: took 2m53.003293209s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 08:33:05.954425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.011134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.346506  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.509064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.845958  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.952356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.009108  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.453032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.510592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.846109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.954081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.053417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.351454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.453361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.509493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.846396  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.953209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.013355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.346185  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.452954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.509941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.846594  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.953166  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.011098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.345673  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.452685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.846291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.952757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.010232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.345715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.452872  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.509757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.845940  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.009576  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.476146  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.476164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.508903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.846546  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.952547  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.009054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.345224  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:13.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.845854  387539 kapi.go:107] duration metric: took 3m1.003676867s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 08:33:13.953193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.953043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.009571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.509029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.952456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.008996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.452993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.509565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.951754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.010077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.452637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.509767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.951958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.009558  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.452610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.509383  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.953289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.452727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.952537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.054307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.453283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.508941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.509772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.452743  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.509695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.952306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.952897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.009648  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.508741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.952701  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.009545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.452359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.552870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.952571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.509263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.009531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.452141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.952219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.008825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.452943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.509596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.951821  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.009481  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.509195  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.953059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.053354  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.452999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.951979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.009797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.453388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.508724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.952067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.009597  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.452510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.509504  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.009757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.451725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.952055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.009994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.509072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.952958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.009293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.453339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.009056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.453293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.508838  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.953074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.013450  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.452649  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.509512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.952032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.009978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.509308  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.952931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.452323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.953222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.009006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.452790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.509538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.951932  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.009432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.455147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.508750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.952251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.009149  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.453440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.009671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.451894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.509637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.951679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.009272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.452122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.509896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.952875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.009456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.452086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.509855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.009503  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.509412  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.951948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.009749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.508624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.452489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.509007  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.952454  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.452929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.509326  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.953179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.009573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.452080  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.952316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.008983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.509589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.509141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.952449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.009163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.452673  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.952399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.008968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.452357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.509312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.008903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.452740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.008515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.952684  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.009269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.452860  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.509842  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.952800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.009471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.452132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.508760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.952191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.008875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.452781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.953587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.054438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.452155  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.508625  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.009015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.452064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.508595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.952010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.010061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.452878  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.509741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.008974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.452307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.508972  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.952590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.452989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.952475  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.453033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.509562  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.008939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.453017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.010460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.451978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.509900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.008912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.452986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.509922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.009396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.452015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.508696  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.053986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.453071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.508707  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.509938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.952708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.951905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.452055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.509716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.952153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.009034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.452857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.509634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.952411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.009151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.453043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.508787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.951746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.452755  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.509577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.951855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.009721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.452270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.509070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.952417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.009119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.452899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.509945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.008973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.452420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.952703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.009419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.452368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.509153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.009176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.451907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.509703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.952486  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.009310  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.453128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.509247  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.952807  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.452479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.951882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.009724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.952303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.052740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.452786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.509524  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.952084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.009393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.452606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.509227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.952919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.009449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.509272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.953056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.008665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.452311  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.952950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.452806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.951992  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.009677  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.454926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.556176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.952552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.009135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.452491  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.509187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.952765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.010044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.453284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.509124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.452601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.509427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.952099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.008641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.452715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.952690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.009533  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.452468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.509120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.952652  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.009453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.452283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.509034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.952982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.010277  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.452898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.509951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.952333  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.009152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.452796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.509514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.951891  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.452769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.509365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.952087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.009812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.508954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.953223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.009045  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.508795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.952125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.452644  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.509926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.009805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.452339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.509062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.952706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.009289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.453174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.553316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.952985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.009340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.453131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.951783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.009764  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.509221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.952799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.009661  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.451963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.951981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.009474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.451982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.510046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.952776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.009347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.451710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.952334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.452851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.509879  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.952761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.009609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.453093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.508618  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.952367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.009335  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.451828  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.509765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.952131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.008768  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.452125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.508617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.951915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.452347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.509044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.953033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.008575  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.509020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.009883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.952427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.008882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.451996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.509798  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.952349  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.008994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.452078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.953244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.008791  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.452820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.509438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.952276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.454329  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.508526  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.951927  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.009514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.509176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.953124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.008742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.452318  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.952978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.008626  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.451991  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.509530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.008765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.452089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.952535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.009257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.452850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.509391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.951665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.010070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.452234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.508751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.952557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.452356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.509081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.952954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.451578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.509069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.952979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.009394  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.451672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.953084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.008804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.452100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.508590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.952186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.008919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.451692  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.952159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.008936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.452290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.509522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.952657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.452687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.952004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.009665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.509219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.953317  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.053305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.452957  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.509406  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.951753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.010494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.451613  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.951916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.009368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.451621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.509537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.951986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.009697  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.509309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.953131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.008745  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.452118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.508915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.952506  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.009283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.453125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.508686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.009048  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.452863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.509403  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.952211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.009143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.452579  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.952593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.009236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.452668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.509287  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.953152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.008951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.451960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.951797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.009781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.452176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.508962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.952918  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.010145  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.452488  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.509471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.951970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.009582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.451912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.508700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.952497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.453230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.509119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.952889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.454455  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.509009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.953474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.009465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.452010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.509605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.951929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.508723  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.053411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.452665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.953146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.008802  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.451806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.509590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.952410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.053369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.509264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.952818  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.009233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.451994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.509760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.009364  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.452180  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.509156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.009587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.451744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.509487  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.008678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.451795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.509551  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.952298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.452628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.509567  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.952018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.008605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.509196  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.953269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.009042  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.452866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.509473  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.952009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.053084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.509189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.952595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.452191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.508730  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.952389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.452680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.509241  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.952532  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.009493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.452238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.509131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.952695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.009405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.452184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.509012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.952350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.009078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.452686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.509295  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.953015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.008664  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.452062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.508632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.952395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.008941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.451875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.509433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.952771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.009472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.509331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.953175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.009259  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.453005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.509759  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.952445  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.008890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.452239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.952339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.452889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.053004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.452816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.509585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.951856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.009542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.452139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.508997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.009668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.452051  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.952019  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.008662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.451816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.509495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.953217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.008712  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.452395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.008657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.451985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.509265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.953263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.008734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.509077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.952688  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.009433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.952693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.009377  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.452681  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.509209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.952342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.009052  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.452762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.509115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.953186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.010178  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.509505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.951715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.009812  387539 kapi.go:107] duration metric: took 5m46.503976887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 08:36:06.011826  387539 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-051783 cluster.
	I0929 08:36:06.013337  387539 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 08:36:06.014809  387539 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
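(The three messages above describe minikube's gcp-auth addon behaviour: credentials are injected into every new pod unless the pod opts out via the `gcp-auth-skip-secret` label. As a minimal sketch only, assuming the label value "true" and using a placeholder pod name and image that do not come from this report, such an opted-out pod could be created against this cluster with:

	kubectl --context addons-051783 run gcp-auth-skip-demo --image=busybox --labels="gcp-auth-skip-secret=true" -- sleep 3600

Pods created without that label would continue to receive the mounted credentials, as the log states.)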
	I0929 08:36:06.452825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.952244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.452410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.952142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.452175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.451974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.953036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.452917  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.953235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.451608  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.952203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.952132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:13.449535  387539 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0929 08:36:13.449570  387539 kapi.go:107] duration metric: took 6m0.00092228s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0929 08:36:13.449699  387539 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0929 08:36:13.451535  387539 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth
	I0929 08:36:13.453038  387539 addons.go:514] duration metric: took 6m2.320628972s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth]
	I0929 08:36:13.453089  387539 start.go:246] waiting for cluster config update ...
	I0929 08:36:13.453117  387539 start.go:255] writing updated cluster config ...
	I0929 08:36:13.453476  387539 ssh_runner.go:195] Run: rm -f paused
	I0929 08:36:13.457677  387539 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:13.461120  387539 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.465176  387539 pod_ready.go:94] pod "coredns-66bc5c9577-n8bx8" is "Ready"
	I0929 08:36:13.465203  387539 pod_ready.go:86] duration metric: took 4.058605ms for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.467075  387539 pod_ready.go:83] waiting for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.470714  387539 pod_ready.go:94] pod "etcd-addons-051783" is "Ready"
	I0929 08:36:13.470733  387539 pod_ready.go:86] duration metric: took 3.636114ms for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.472521  387539 pod_ready.go:83] waiting for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.476217  387539 pod_ready.go:94] pod "kube-apiserver-addons-051783" is "Ready"
	I0929 08:36:13.476238  387539 pod_ready.go:86] duration metric: took 3.697266ms for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.478025  387539 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.862501  387539 pod_ready.go:94] pod "kube-controller-manager-addons-051783" is "Ready"
	I0929 08:36:13.862531  387539 pod_ready.go:86] duration metric: took 384.48807ms for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.061450  387539 pod_ready.go:83] waiting for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.461226  387539 pod_ready.go:94] pod "kube-proxy-wbl7p" is "Ready"
	I0929 08:36:14.461255  387539 pod_ready.go:86] duration metric: took 399.774957ms for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.661898  387539 pod_ready.go:83] waiting for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061371  387539 pod_ready.go:94] pod "kube-scheduler-addons-051783" is "Ready"
	I0929 08:36:15.061418  387539 pod_ready.go:86] duration metric: took 399.4933ms for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061435  387539 pod_ready.go:40] duration metric: took 1.603719933s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:15.109384  387539 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 08:36:15.111939  387539 out.go:179] * Done! kubectl is now configured to use "addons-051783" cluster and "default" namespace by default
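The addon output at 08:36:06 above notes that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod object using the Kubernetes Go client types follows; the pod name, namespace, image, and the "true" label value are illustrative assumptions, only the label key comes from the output above.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod carrying the opt-out label mentioned in the addon
	// output; only the label key is taken from the log, the rest is a sketch.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds",
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/k8s-minikube/busybox",
			}},
		},
	}
	fmt.Printf("pod %s labels: %v\n", pod.Name, pod.Labels)
}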
	
	
	==> CRI-O <==
	Sep 29 08:39:00 addons-051783 crio[938]: time="2025-09-29 08:39:00.958222178Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=a80cc25c-68ab-42fd-be50-e7cc9f53cda2 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:39:00 addons-051783 crio[938]: time="2025-09-29 08:39:00.958477880Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=a80cc25c-68ab-42fd-be50-e7cc9f53cda2 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.059463565Z" level=info msg="Stopping pod sandbox: 98289dafbaebe704e4bea8ca049e4474a5a76a94c2d660b944747abb0ac64c69" id=b4b7a754-c0c4-48a0-94ca-a0328d7e381f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.059532092Z" level=info msg="Stopped pod sandbox (already stopped): 98289dafbaebe704e4bea8ca049e4474a5a76a94c2d660b944747abb0ac64c69" id=b4b7a754-c0c4-48a0-94ca-a0328d7e381f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.059826871Z" level=info msg="Removing pod sandbox: 98289dafbaebe704e4bea8ca049e4474a5a76a94c2d660b944747abb0ac64c69" id=7508510f-49c7-484d-9ac9-5e47369b3fd4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.065538290Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.065574864Z" level=info msg="Removed pod sandbox: 98289dafbaebe704e4bea8ca049e4474a5a76a94c2d660b944747abb0ac64c69" id=7508510f-49c7-484d-9ac9-5e47369b3fd4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.065932650Z" level=info msg="Stopping pod sandbox: e8a4a17d3caa51d16f4efb380e720a94d295fb2427cbc8444eee9d0407b2a808" id=7f9ecdae-5a88-4135-9f7c-29a03762d82d name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.065959809Z" level=info msg="Stopped pod sandbox (already stopped): e8a4a17d3caa51d16f4efb380e720a94d295fb2427cbc8444eee9d0407b2a808" id=7f9ecdae-5a88-4135-9f7c-29a03762d82d name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.066227756Z" level=info msg="Removing pod sandbox: e8a4a17d3caa51d16f4efb380e720a94d295fb2427cbc8444eee9d0407b2a808" id=af3967c5-825d-403d-b08d-8c08dd5a62d3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.072216483Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.072249442Z" level=info msg="Removed pod sandbox: e8a4a17d3caa51d16f4efb380e720a94d295fb2427cbc8444eee9d0407b2a808" id=af3967c5-825d-403d-b08d-8c08dd5a62d3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.072623723Z" level=info msg="Stopping pod sandbox: 035aaf2fb1fe44b350ce3a815098da0f7c663205bcf24cbcaee931fabbcfd275" id=4fcc3652-1775-44fd-90d4-6e3dd6fb0616 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.072672920Z" level=info msg="Stopped pod sandbox (already stopped): 035aaf2fb1fe44b350ce3a815098da0f7c663205bcf24cbcaee931fabbcfd275" id=4fcc3652-1775-44fd-90d4-6e3dd6fb0616 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.072985013Z" level=info msg="Removing pod sandbox: 035aaf2fb1fe44b350ce3a815098da0f7c663205bcf24cbcaee931fabbcfd275" id=32d81fb5-7c15-4d37-b330-50b2fd546c02 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.079474420Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 08:39:06 addons-051783 crio[938]: time="2025-09-29 08:39:06.079506005Z" level=info msg="Removed pod sandbox: 035aaf2fb1fe44b350ce3a815098da0f7c663205bcf24cbcaee931fabbcfd275" id=32d81fb5-7c15-4d37-b330-50b2fd546c02 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:39:07 addons-051783 crio[938]: time="2025-09-29 08:39:07.958113840Z" level=info msg="Checking image status: docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" id=0d863a64-19d9-4868-903a-b7ef398c72e8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:39:07 addons-051783 crio[938]: time="2025-09-29 08:39:07.958426944Z" level=info msg="Image docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 not found" id=0d863a64-19d9-4868-903a-b7ef398c72e8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:39:14 addons-051783 crio[938]: time="2025-09-29 08:39:14.958917879Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=5c961ce2-5ec2-4048-a978-b403f40fa915 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:39:14 addons-051783 crio[938]: time="2025-09-29 08:39:14.959303431Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=5c961ce2-5ec2-4048-a978-b403f40fa915 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:39:18 addons-051783 crio[938]: time="2025-09-29 08:39:18.720069106Z" level=info msg="Pulling image: docker.io/nginx:latest" id=27f6858e-2659-4f37-a6cf-f8aad0a838ac name=/runtime.v1.ImageService/PullImage
	Sep 29 08:39:18 addons-051783 crio[938]: time="2025-09-29 08:39:18.736265066Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 29 08:39:25 addons-051783 crio[938]: time="2025-09-29 08:39:25.959656685Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=50348a20-6c54-4dde-9e59-7a010fe7f229 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:39:25 addons-051783 crio[938]: time="2025-09-29 08:39:25.959979922Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=50348a20-6c54-4dde-9e59-7a010fe7f229 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	15470dfdbc373       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	27b09cd861214       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	f91efb30edf5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          2 minutes ago        Running             busybox                                  0                   b37a2c191a161       busybox
	b891eff935e5b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   0a15333993f59       csi-hostpathplugin-59n9q
	1b49b8a0c49b0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   0a15333993f59       csi-hostpathplugin-59n9q
	78cd30ad0ac78       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                4 minutes ago        Running             node-driver-registrar                    0                   0a15333993f59       csi-hostpathplugin-59n9q
	80836b6027c82       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             6 minutes ago        Running             controller                               0                   3f400eb1db037       ingress-nginx-controller-9cc49f96f-qxqnk
	fa2f9b0c2f698       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            6 minutes ago        Running             gadget                                   0                   2b559b62ddeb7       gadget-p475s
	45863f8b96f32       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago        Running             volume-snapshot-controller               0                   f6de9f678281f       snapshot-controller-7d9fbc56b8-xpkwb
	958aa9722d317       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago        Running             csi-external-health-monitor-controller   0                   0a15333993f59       csi-hostpathplugin-59n9q
	727b1119f42fa       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago        Running             csi-attacher                             0                   942be1f7fe3d6       csi-hostpath-attacher-0
	7cd9c383cc30b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   6 minutes ago        Exited              patch                                    0                   748502b4be4ae       ingress-nginx-admission-patch-scvfj
	a07e229bf44a3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago        Running             volume-snapshot-controller               0                   6d94b7786d291       snapshot-controller-7d9fbc56b8-n65gp
	964faa56de026       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago        Running             csi-resizer                              0                   e4387328f31ab       csi-hostpath-resizer-0
	739db184c3579       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago        Running             local-path-provisioner                   0                   7bd7dc81e5ff1       local-path-provisioner-648f6765c9-mzt6q
	64ec0688b1d33       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   7 minutes ago        Exited              create                                   0                   544ece1299156       ingress-nginx-admission-create-rbxvf
	0b9d99dc227ef       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               8 minutes ago        Running             cloud-spanner-emulator                   0                   6b5028c3929cf       cloud-spanner-emulator-85f6b7fc65-8dpkv
	ec2908a8acb76       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago        Running             coredns                                  0                   8e80666def432       coredns-66bc5c9577-n8bx8
	48e51a6b3842e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago        Running             storage-provisioner                      0                   b3063249d1902       storage-provisioner
	e6e25b7f19aec       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             9 minutes ago        Running             kindnet-cni                              0                   ea7b34d68514f       kindnet-47v7m
	a04df67a3379a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             9 minutes ago        Running             kube-proxy                               0                   9dbf0742f683c       kube-proxy-wbl7p
	3d5bc8bd7f0ff       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             9 minutes ago        Running             etcd                                     0                   240e67822abd8       etcd-addons-051783
	2e4ff50d0ab7d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             9 minutes ago        Running             kube-apiserver                           0                   7d31b1c07e6fc       kube-apiserver-addons-051783
	6d75e80cafef2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             9 minutes ago        Running             kube-controller-manager                  0                   0e144a50e60a7       kube-controller-manager-addons-051783
	33ea9996cc1d3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             9 minutes ago        Running             kube-scheduler                           0                   eee48e5387175       kube-scheduler-addons-051783
	
	
	==> coredns [ec2908a8acb7634faddb0add70c1cdc6e4b2ec0e64082e83c00bcc1f5187825c] <==
	[INFO] 10.244.0.22:53146 - 52855 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135376s
	[INFO] 10.244.0.22:44463 - 13157 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003407125s
	[INFO] 10.244.0.22:42741 - 2598 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005880456s
	[INFO] 10.244.0.22:43358 - 65412 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005081069s
	[INFO] 10.244.0.22:56808 - 9814 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005221504s
	[INFO] 10.244.0.22:57222 - 14161 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005164648s
	[INFO] 10.244.0.22:51834 - 10942 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006548594s
	[INFO] 10.244.0.22:37769 - 48093 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004505471s
	[INFO] 10.244.0.22:41744 - 45710 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007413415s
	[INFO] 10.244.0.22:56260 - 25719 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002697955s
	[INFO] 10.244.0.22:35710 - 58420 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003322975s
	[INFO] 10.244.0.26:59060 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000230685s
	[INFO] 10.244.0.26:45421 - 3 "AAAA IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000136278s
	[INFO] 10.244.0.26:44591 - 4 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116365s
	[INFO] 10.244.0.26:57553 - 5 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117524s
	[INFO] 10.244.0.26:49960 - 6 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003803543s
	[INFO] 10.244.0.26:37529 - 7 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 204 0.004482599s
	[INFO] 10.244.0.26:51766 - 8 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.int. udp 75 false 512" NXDOMAIN qr,rd,ra 148 0.147452363s
	[INFO] 10.244.0.26:46339 - 9 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000143392s
	[INFO] 10.244.0.26:35817 - 10 "A IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000114781s
	[INFO] 10.244.0.26:57333 - 11 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128127s
	[INFO] 10.244.0.26:33589 - 12 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009747s
	[INFO] 10.244.0.26:38381 - 13 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003185786s
	[INFO] 10.244.0.26:42582 - 14 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 204 0.005148102s
	[INFO] 10.244.0.26:42532 - 15 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.int. udp 75 false 512" NXDOMAIN qr,rd,ra 148 0.130600393s
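The failing registry check in this run resolved registry.kube-system.svc.cluster.local, and the coredns log above shows that name being retried through the pod's DNS search domains (.default.svc.cluster.local, .svc.cluster.local, .cluster.local, .local, and the GCE-internal suffixes) before the bare query succeeds or fails. A small Go sketch, intended to be run inside a cluster pod, illustrates that search-list behaviour; only the service name is taken from the log, the probing approach is an assumption.

package main

import (
	"fmt"
	"net"
)

func main() {
	// A name with fewer dots than resolv.conf's "ndots" is expanded through
	// the pod's search list (the retries visible in the coredns log above);
	// a trailing dot marks the name as absolute and skips that expansion.
	for _, host := range []string{
		"registry.kube-system.svc.cluster.local",  // subject to search expansion
		"registry.kube-system.svc.cluster.local.", // fully qualified
	} {
		addrs, err := net.LookupHost(host)
		fmt.Printf("%-45s -> %v err=%v\n", host, addrs, err)
	}
}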
	
	
	==> describe nodes <==
	Name:               addons-051783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-051783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=addons-051783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-051783
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-051783"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 08:30:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-051783
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 08:39:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-051783
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 83273b57f406470abdf516e252de2f52
	  System UUID:                ec5529e1-1ad9-400f-8294-1adf6616ba82
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  default                     cloud-spanner-emulator-85f6b7fc65-8dpkv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  gadget                      gadget-p475s                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-qxqnk    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         9m15s
	  kube-system                 amd-gpu-device-plugin-xvf9b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 coredns-66bc5c9577-n8bx8                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m16s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 csi-hostpathplugin-59n9q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 etcd-addons-051783                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m21s
	  kube-system                 kindnet-47v7m                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m16s
	  kube-system                 kube-apiserver-addons-051783                250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-addons-051783       200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-wbl7p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-scheduler-addons-051783                100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 snapshot-controller-7d9fbc56b8-n65gp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-xpkwb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  local-path-storage          local-path-provisioner-648f6765c9-mzt6q     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2vsqw              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     9m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             438Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m14s  kube-proxy       
	  Normal  Starting                 9m22s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node addons-051783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node addons-051783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node addons-051783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m17s  node-controller  Node addons-051783 event: Registered Node addons-051783 in Controller
	  Normal  NodeReady                8m35s  kubelet          Node addons-051783 status is now: NodeReady
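The "Allocated resources" totals above follow directly from the per-pod requests in the Non-terminated Pods table: 100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 950m CPU requested. A throwaway Go sketch of the same sum, with values transcribed from the table and pod names shortened for readability:

package main

import "fmt"

func main() {
	// CPU requests in millicores, transcribed from the node description above;
	// summing them reproduces the 950m figure under "Allocated resources".
	requests := map[string]int{
		"ingress-nginx-controller": 100,
		"coredns":                  100,
		"etcd":                     100,
		"kindnet":                  100,
		"kube-apiserver":           250,
		"kube-controller-manager":  200,
		"kube-scheduler":           100,
	}
	total := 0
	for _, m := range requests {
		total += m
	}
	fmt.Printf("total CPU requests: %dm\n", total) // prints 950m
}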
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[ +16.774979] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 21 41 37 dd f5 08 06
	[  +0.000328] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[  +6.075530] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 33 34 7b 85 cf 08 06
	[  +0.055887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[Sep29 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fb 19 b5 d0 db 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[  +6.806604] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[ +13.433681] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	[  +8.966707] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 f7 73 94 db cd 08 06
	[  +0.000344] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[Sep29 08:07] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 ad d0 02 25 47 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	
	
	==> etcd [3d5bc8bd7f0ffa9831231e2ccd173ca20be89d6dcc1ee1ad3b14f8dd9571bb86] <==
	{"level":"warn","ts":"2025-09-29T08:30:02.977228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:02.983452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:02.989881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:02.997494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.003681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.011615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.018242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.030088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.033604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.039960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.046371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.100824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.793114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.799945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.542994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.549599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.569139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.575527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:32:28.071330Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.763336ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071530Z","caller":"traceutil/trace.go:172","msg":"trace[30119979] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1117; }","duration":"161.980989ms","start":"2025-09-29T08:32:27.909530Z","end":"2025-09-29T08:32:28.071511Z","steps":["trace[30119979] 'range keys from in-memory index tree'  (duration: 161.701686ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T08:32:28.071329Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.131454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071650Z","caller":"traceutil/trace.go:172","msg":"trace[1183857226] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"120.458435ms","start":"2025-09-29T08:32:27.951174Z","end":"2025-09-29T08:32:28.071633Z","steps":["trace[1183857226] 'range keys from in-memory index tree'  (duration: 120.052644ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T08:33:12.239457Z","caller":"traceutil/trace.go:172","msg":"trace[155675200] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"129.084223ms","start":"2025-09-29T08:33:12.110348Z","end":"2025-09-29T08:33:12.239432Z","steps":["trace[155675200] 'process raft request'  (duration: 69.579624ms)","trace[155675200] 'compare'  (duration: 59.405727ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T08:33:12.474373Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.785446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:33:12.474452Z","caller":"traceutil/trace.go:172","msg":"trace[1612262900] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1258; }","duration":"129.87677ms","start":"2025-09-29T08:33:12.344560Z","end":"2025-09-29T08:33:12.474437Z","steps":["trace[1612262900] 'range keys from in-memory index tree'  (duration: 129.713966ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:39:27 up  2:21,  0 users,  load average: 0.11, 0.42, 0.76
	Linux addons-051783 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e6e25b7f19aec7f99b8219bbbaa88084f2510369dbfa360e267a083261d1c336] <==
	I0929 08:37:22.476072       1 main.go:301] handling current node
	I0929 08:37:32.480923       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:37:32.480967       1 main.go:301] handling current node
	I0929 08:37:42.478908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:37:42.478950       1 main.go:301] handling current node
	I0929 08:37:52.479909       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:37:52.479942       1 main.go:301] handling current node
	I0929 08:38:02.477986       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:38:02.478037       1 main.go:301] handling current node
	I0929 08:38:12.476159       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:38:12.476202       1 main.go:301] handling current node
	I0929 08:38:22.477967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:38:22.478016       1 main.go:301] handling current node
	I0929 08:38:32.477898       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:38:32.477936       1 main.go:301] handling current node
	I0929 08:38:42.479923       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:38:42.479955       1 main.go:301] handling current node
	I0929 08:38:52.480916       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:38:52.480949       1 main.go:301] handling current node
	I0929 08:39:02.475922       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:39:02.475967       1 main.go:301] handling current node
	I0929 08:39:12.476930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:39:12.476982       1 main.go:301] handling current node
	I0929 08:39:22.479369       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:39:22.479412       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e4ff50d0ab7df575a409e71f6c86b1e3bd4b8f41db0427eb9d65cbbef08b9a3] <==
	W0929 08:30:40.575542       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0929 08:30:52.660152       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.660293       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.661168       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.661206       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.680870       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.680901       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.682064       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.682170       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:59.130480       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 08:30:59.130524       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.130558       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 08:30:59.130912       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.135946       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.157237       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	I0929 08:30:59.225977       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 08:36:44.813354       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47410: use of closed network connection
	E0929 08:36:44.997114       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47438: use of closed network connection
	I0929 08:36:54.051263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.58.104"}
	I0929 08:37:00.154224       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 08:37:00.239132       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 08:37:00.408198       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.245.4"}
	
	
	==> kube-controller-manager [6d75e80cafef289bcb0634728686530f7d177ec79248071405ed0223eda388c2] <==
	I0929 08:30:10.528446       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 08:30:10.528568       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 08:30:10.529057       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 08:30:10.531230       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 08:30:10.531262       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 08:30:10.531324       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 08:30:10.531387       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 08:30:10.531454       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 08:30:10.531471       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 08:30:10.531993       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:30:10.537696       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 08:30:10.537989       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-051783" podCIDRs=["10.244.0.0/24"]
	I0929 08:30:10.542899       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 08:30:10.552460       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 08:30:12.553957       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0929 08:30:40.536876       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 08:30:40.537102       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0929 08:30:40.537173       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 08:30:40.560116       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0929 08:30:40.563366       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 08:30:40.638265       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:30:40.663861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 08:30:55.534409       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0929 08:36:58.265328       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0929 08:37:30.688902       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [a04df67a3379aa412e270c65b38675702f42ba0dc9e5c07b8052fb9a090d6471] <==
	I0929 08:30:12.128941       1 server_linux.go:53] "Using iptables proxy"
	I0929 08:30:12.417641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:30:12.520178       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:30:12.520269       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:30:12.522477       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:30:12.570590       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:30:12.570755       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:30:12.583981       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:30:12.584563       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:30:12.584628       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:30:12.586703       1 config.go:200] "Starting service config controller"
	I0929 08:30:12.586768       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:30:12.586873       1 config.go:309] "Starting node config controller"
	I0929 08:30:12.586913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:30:12.586938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 08:30:12.587504       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:30:12.587567       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:30:12.587568       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:30:12.587628       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:30:12.687916       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:30:12.688043       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:30:12.688062       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [33ea9996cc1d356857ab17f8e8157021f2b58227ecdb78065f0395986fc73f7b] <==
	E0929 08:30:03.522570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:03.522679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:03.522790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:03.522954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 08:30:03.522963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:03.522973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:03.523052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 08:30:03.523168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 08:30:03.523181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:03.523198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:03.523218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 08:30:03.523269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:30:03.523304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 08:30:03.523373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:03.523781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:30:04.391474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:04.430593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:04.474872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:04.497934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:04.640977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:04.655178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:04.765484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:04.784825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:04.965095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 08:30:06.819658       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 08:38:46 addons-051783 kubelet[1568]: E0929 08:38:46.049263    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135126049045003  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:38:48 addons-051783 kubelet[1568]: E0929 08:38:48.056485    1568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f"
	Sep 29 08:38:48 addons-051783 kubelet[1568]: E0929 08:38:48.056558    1568 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f"
	Sep 29 08:38:48 addons-051783 kubelet[1568]: E0929 08:38:48.056811    1568 kuberuntime_manager.go:1449] "Unhandled Error" err="container amd-gpu-device-plugin start failed in pod amd-gpu-device-plugin-xvf9b_kube-system(af4f61bf-c919-44d4-8d12-3579f9dce9c6): ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:38:48 addons-051783 kubelet[1568]: E0929 08:38:48.056902    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ErrImagePull: \"reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-xvf9b" podUID="af4f61bf-c919-44d4-8d12-3579f9dce9c6"
	Sep 29 08:38:49 addons-051783 kubelet[1568]: I0929 08:38:49.957871    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:38:52 addons-051783 kubelet[1568]: E0929 08:38:52.959179    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-2vsqw" podUID="64489d6d-e5af-42b1-8efc-47e8285d526b"
	Sep 29 08:38:56 addons-051783 kubelet[1568]: E0929 08:38:56.054298    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135136054097679  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:38:56 addons-051783 kubelet[1568]: E0929 08:38:56.054340    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135136054097679  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:39:00 addons-051783 kubelet[1568]: I0929 08:39:00.957597    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xvf9b" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:39:00 addons-051783 kubelet[1568]: E0929 08:39:00.958877    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-xvf9b" podUID="af4f61bf-c919-44d4-8d12-3579f9dce9c6"
	Sep 29 08:39:06 addons-051783 kubelet[1568]: E0929 08:39:06.058347    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135146057249277  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:39:06 addons-051783 kubelet[1568]: E0929 08:39:06.058388    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135146057249277  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:39:14 addons-051783 kubelet[1568]: I0929 08:39:14.958231    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xvf9b" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:39:14 addons-051783 kubelet[1568]: E0929 08:39:14.959759    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-xvf9b" podUID="af4f61bf-c919-44d4-8d12-3579f9dce9c6"
	Sep 29 08:39:16 addons-051783 kubelet[1568]: E0929 08:39:16.060870    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135156060608713  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:39:16 addons-051783 kubelet[1568]: E0929 08:39:16.060916    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135156060608713  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:39:18 addons-051783 kubelet[1568]: E0929 08:39:18.719629    1568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 08:39:18 addons-051783 kubelet[1568]: E0929 08:39:18.719687    1568 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 08:39:18 addons-051783 kubelet[1568]: E0929 08:39:18.719900    1568 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(b3f305e2-2997-431f-b6d3-7d97f0b357aa): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:39:18 addons-051783 kubelet[1568]: E0929 08:39:18.719952    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="b3f305e2-2997-431f-b6d3-7d97f0b357aa"
	Sep 29 08:39:25 addons-051783 kubelet[1568]: I0929 08:39:25.959147    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xvf9b" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:39:25 addons-051783 kubelet[1568]: E0929 08:39:25.960304    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-xvf9b" podUID="af4f61bf-c919-44d4-8d12-3579f9dce9c6"
	Sep 29 08:39:26 addons-051783 kubelet[1568]: E0929 08:39:26.062994    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135166062754535  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:39:26 addons-051783 kubelet[1568]: E0929 08:39:26.063029    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135166062754535  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	
	
	==> storage-provisioner [48e51a6b3842e2e63335e82d65f22a4db94233392a881d6d3ff86158809cd5ed] <==
	W0929 08:39:03.488695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:05.491881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:05.495544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:07.498568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:07.502716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:09.506046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:09.510191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:11.513374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:11.518896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:13.522898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:13.527194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:15.530878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:15.534539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:17.537730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:17.543430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:19.546624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:19.551185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:21.554873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:21.558657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:23.561985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:23.565984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:25.569481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:25.574711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:27.578525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:39:27.584550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
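Every pull failure in the kubelet log above has the same root cause: Docker Hub's unauthenticated pull rate limit (the repeated "toomanyrequests" errors for docker.io/rocm/k8s-device-plugin, docker.io/marcnuri/yakd and docker.io/library/nginx). One quick host-side check is Docker's documented rate-limit probe; a minimal sketch, assuming curl and jq are available on the CI host (this is not part of the captured test run):

	# Fetch an anonymous token for the rate-limit preview repository, then read the
	# ratelimit-limit / ratelimit-remaining headers from a manifest HEAD request.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

A ratelimit-remaining of 0 here would match the toomanyrequests responses crio is getting inside the node.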
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051783 -n addons-051783
helpers_test.go:269: (dbg) Run:  kubectl --context addons-051783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b kube-ingress-dns-minikube yakd-dashboard-5ff678cb9-2vsqw
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Yakd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-051783 describe pod nginx task-pv-pod ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b kube-ingress-dns-minikube yakd-dashboard-5ff678cb9-2vsqw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-051783 describe pod nginx task-pv-pod ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b kube-ingress-dns-minikube yakd-dashboard-5ff678cb9-2vsqw: exit status 1 (76.911019ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:37:00 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrnn8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wrnn8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m28s                default-scheduler  Successfully assigned default/nginx to addons-051783
	  Warning  Failed     73s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    73s                  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     73s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    62s (x2 over 2m28s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     10s (x2 over 73s)    kubelet            Error: ErrImagePull
	  Warning  Failed     10s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:38:27 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z2l94 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-z2l94:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  61s   default-scheduler  Successfully assigned default/task-pv-pod to addons-051783
	  Normal  Pulling    60s   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rbxvf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-scvfj" not found
	Error from server (NotFound): pods "amd-gpu-device-plugin-xvf9b" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-2vsqw" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-051783 describe pod nginx task-pv-pod ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b kube-ingress-dns-minikube yakd-dashboard-5ff678cb9-2vsqw: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 addons disable yakd --alsologtostderr -v=1: (2m4.815499972s)
--- FAIL: TestAddons/parallel/Yakd (247.70s)
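The yakd-dashboard pod named in the post-mortem above never left ImagePullBackOff, and the kubelet events trace it to the same Docker Hub throttling, so this failure is downstream of registry rate limiting rather than of the addon itself. One generic mitigation, sketched here and not something this run does, is to give the affected namespace authenticated pull credentials; the secret name dockerhub-creds, the choice of the default service account, and the $DOCKER_USER/$DOCKER_PAT variables are all placeholders:

	# Hypothetical Docker Hub credentials, attached to the service account the pod uses
	# (shown for "default") so its pulls run against an authenticated, higher quota.
	# Repeat per affected namespace (yakd-dashboard, kube-system) and recreate the
	# stuck pods afterwards so they pick up the new imagePullSecrets.
	kubectl --context addons-051783 -n yakd-dashboard create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PAT"
	kubectl --context addons-051783 -n yakd-dashboard patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'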

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (363.46s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
helpers_test.go:337: TestAddons/parallel/AmdGpuDevicePlugin: WARNING: pod list for "kube-system" "name=amd-gpu-device-plugin" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:1038: ***** TestAddons/parallel/AmdGpuDevicePlugin: pod "name=amd-gpu-device-plugin" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:1038: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051783 -n addons-051783
addons_test.go:1038: TestAddons/parallel/AmdGpuDevicePlugin: showing logs for failed pods as of 2025-09-29 08:43:07.024241725 +0000 UTC m=+834.670867171
addons_test.go:1038: (dbg) Run:  kubectl --context addons-051783 describe po amd-gpu-device-plugin-xvf9b -n kube-system
addons_test.go:1038: (dbg) kubectl --context addons-051783 describe po amd-gpu-device-plugin-xvf9b -n kube-system:
Name:                 amd-gpu-device-plugin-xvf9b
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      default
Node:                 addons-051783/192.168.49.2
Start Time:           Mon, 29 Sep 2025 08:30:52 +0000
Labels:               controller-revision-hash=7f87d6fd8d
                      k8s-app=amd-gpu-device-plugin
                      name=amd-gpu-device-plugin
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  DaemonSet/amd-gpu-device-plugin
Containers:
  amd-gpu-device-plugin:
    Container ID:   
    Image:          docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /sys from sys (rw)
      /var/lib/kubelet/device-plugins from dp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q7wb7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  dp:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/device-plugins
    HostPathType:  
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:  
  kube-api-access-q7wb7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/arch=amd64
Tolerations:                 CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason                           Age                   From               Message
  ----     ------                           ----                  ----               -------
  Normal   Scheduled                        12m                   default-scheduler  Successfully assigned kube-system/amd-gpu-device-plugin-xvf9b to addons-051783
  Warning  Failed                           4m19s (x4 over 10m)   kubelet            Failed to pull image "docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f": reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed                           4m19s (x4 over 10m)   kubelet            Error: ErrImagePull
  Warning  FailedToRetrieveImagePullSecret  3m53s (x11 over 12m)  kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.
  Warning  Failed                           3m53s (x7 over 10m)   kubelet            Error: ImagePullBackOff
  Normal   Pulling                          2m47s (x5 over 12m)   kubelet            Pulling image "docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f"
  Warning  Failed                           24s                   kubelet            Failed to pull image "docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f": initializing source docker://rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff                          13s (x12 over 10m)    kubelet            Back-off pulling image "docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f"
addons_test.go:1038: (dbg) Run:  kubectl --context addons-051783 logs amd-gpu-device-plugin-xvf9b -n kube-system
addons_test.go:1038: (dbg) Non-zero exit: kubectl --context addons-051783 logs amd-gpu-device-plugin-xvf9b -n kube-system: exit status 1 (70.93981ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "amd-gpu-device-plugin" in pod "amd-gpu-device-plugin-xvf9b" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
addons_test.go:1038: kubectl --context addons-051783 logs amd-gpu-device-plugin-xvf9b -n kube-system: exit status 1
addons_test.go:1039: failed waiting for amd-gpu-device-plugin pod: name=amd-gpu-device-plugin within 6m0s: context deadline exceeded
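The events above show crio failing to fetch the digest-pinned rocm/k8s-device-plugin manifest with toomanyrequests. Because CRI-O resolves images through the containers/image stack, the same fetch can be reproduced from the CI host with skopeo to separate registry throttling from node-side networking; a diagnostic sketch, assuming skopeo is installed on the host:

	# Anonymous fetch of exactly the manifest the kubelet asked for; a toomanyrequests
	# error here confirms Docker Hub throttling rather than a cluster-side problem.
	skopeo inspect docker://docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f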
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/AmdGpuDevicePlugin]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/AmdGpuDevicePlugin]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-051783
helpers_test.go:243: (dbg) docker inspect addons-051783:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	        "Created": "2025-09-29T08:29:49.784096917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 388185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T08:29:49.817498779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/hosts",
	        "LogPath": "/var/lib/docker/containers/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24/d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24-json.log",
	        "Name": "/addons-051783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-051783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-051783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5025459b8313375032188dd860d22cb07b3b356100849a0ba0e302fcc37ed24",
	                "LowerDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1063fc4b399ea2bd5880e813de0d7384c434fa00d554d8dee256c89adf5036e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-051783",
	                "Source": "/var/lib/docker/volumes/addons-051783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-051783",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-051783",
	                "name.minikube.sigs.k8s.io": "addons-051783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "047419f5f1ab31c122f731e4981df640cdefbc71a38b2a98a0269c254b8b5147",
	            "SandboxKey": "/var/run/docker/netns/047419f5f1ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-051783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:6e:72:c6:39:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0a6b532c24ef61399a92b99bcc9c2c11ccb6f875b789fadd5474d59e3dfaa8b",
	                    "EndpointID": "1838c1e0213d9bfb41a2e140fea05dd9b5a4866fea7930ce517a2c020e4c5b9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-051783",
	                        "d5025459b831"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
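The post-mortem dumps the full docker inspect document, but the fields that usually matter when triaging this profile are the published host ports (SSH on 33139, API server on 33142) and the node address on the addons-051783 network (192.168.49.2). They can be pulled directly with Go-template filters instead of scanning the JSON; a sketch run on the same host:

	# Published port map and the container's IP on the minikube network.
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-051783
	docker inspect -f '{{(index .NetworkSettings.Networks "addons-051783").IPAddress}}' addons-051783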
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-051783 -n addons-051783
helpers_test.go:252: <<< TestAddons/parallel/AmdGpuDevicePlugin FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/AmdGpuDevicePlugin]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 logs -n 25: (1.385668336s)
helpers_test.go:260: TestAddons/parallel/AmdGpuDevicePlugin logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-749576 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-749576                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-575596                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-575596   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-749576                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-749576   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-084266 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-084266                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-084266 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-867285 --alsologtostderr --binary-mirror http://127.0.0.1:34813 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-867285                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-867285   │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ start   │ -p addons-051783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ enable headlamp -p addons-051783 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:36 UTC │ 29 Sep 25 08:36 UTC │
	│ addons  │ addons-051783 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-051783                                                                                                                                                                                                                                                                                                                                                                                           │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ addons  │ addons-051783 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:37 UTC │ 29 Sep 25 08:37 UTC │
	│ ip      │ addons-051783 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	│ addons  │ addons-051783 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:38 UTC │ 29 Sep 25 08:38 UTC │
	│ addons  │ addons-051783 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:39 UTC │ 29 Sep 25 08:41 UTC │
	│ addons  │ addons-051783 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-051783          │ jenkins │ v1.37.0 │ 29 Sep 25 08:41 UTC │ 29 Sep 25 08:41 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:29:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:29:26.048391  387539 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:29:26.048698  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048709  387539 out.go:374] Setting ErrFile to fd 2...
	I0929 08:29:26.048715  387539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:26.048947  387539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:29:26.049570  387539 out.go:368] Setting JSON to false
	I0929 08:29:26.050522  387539 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7915,"bootTime":1759126651,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:29:26.050623  387539 start.go:140] virtualization: kvm guest
	I0929 08:29:26.052691  387539 out.go:179] * [addons-051783] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:29:26.053951  387539 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:29:26.053949  387539 notify.go:220] Checking for updates...
	I0929 08:29:26.056443  387539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:29:26.057666  387539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:29:26.058965  387539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:29:26.060266  387539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:29:26.061458  387539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:29:26.062925  387539 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:29:26.085693  387539 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:29:26.085842  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.138374  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.129030053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.138489  387539 docker.go:318] overlay module found
	I0929 08:29:26.140424  387539 out.go:179] * Using the docker driver based on user configuration
	I0929 08:29:26.141686  387539 start.go:304] selected driver: docker
	I0929 08:29:26.141705  387539 start.go:924] validating driver "docker" against <nil>
	I0929 08:29:26.141717  387539 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:29:26.142365  387539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:26.198070  387539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 08:29:26.188331621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:26.198307  387539 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 08:29:26.198590  387539 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:29:26.200386  387539 out.go:179] * Using Docker driver with root privileges
	I0929 08:29:26.201498  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:26.201578  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:26.201592  387539 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 08:29:26.201692  387539 start.go:348] cluster config:
	{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

                                                
                                                
	I0929 08:29:26.202985  387539 out.go:179] * Starting "addons-051783" primary control-plane node in "addons-051783" cluster
	I0929 08:29:26.204068  387539 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 08:29:26.205294  387539 out.go:179] * Pulling base image v0.0.48 ...
	I0929 08:29:26.206376  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:26.206412  387539 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 08:29:26.206422  387539 cache.go:58] Caching tarball of preloaded images
	I0929 08:29:26.206482  387539 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 08:29:26.206520  387539 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 08:29:26.206532  387539 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 08:29:26.206899  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:26.206927  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json: {Name:mk2a286bc12b96a7a99203a2062747f0cef91a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:26.223250  387539 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 08:29:26.223398  387539 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 08:29:26.223419  387539 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 08:29:26.223423  387539 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 08:29:26.223433  387539 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 08:29:26.223443  387539 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 08:29:38.381567  387539 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 08:29:38.381612  387539 cache.go:232] Successfully downloaded all kic artifacts
	I0929 08:29:38.381692  387539 start.go:360] acquireMachinesLock for addons-051783: {Name:mk2e012788fca6778bd19d14926129f41648dfda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 08:29:38.381939  387539 start.go:364] duration metric: took 219.203µs to acquireMachinesLock for "addons-051783"
	I0929 08:29:38.381976  387539 start.go:93] Provisioning new machine with config: &{Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:29:38.382063  387539 start.go:125] createHost starting for "" (driver="docker")
	I0929 08:29:38.383873  387539 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 08:29:38.384110  387539 start.go:159] libmachine.API.Create for "addons-051783" (driver="docker")
	I0929 08:29:38.384143  387539 client.go:168] LocalClient.Create starting
	I0929 08:29:38.384255  387539 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem
	I0929 08:29:38.717409  387539 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem
	I0929 08:29:39.058441  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 08:29:39.075697  387539 cli_runner.go:211] docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 08:29:39.075776  387539 network_create.go:284] running [docker network inspect addons-051783] to gather additional debugging logs...
	I0929 08:29:39.075797  387539 cli_runner.go:164] Run: docker network inspect addons-051783
	W0929 08:29:39.093367  387539 cli_runner.go:211] docker network inspect addons-051783 returned with exit code 1
	I0929 08:29:39.093407  387539 network_create.go:287] error running [docker network inspect addons-051783]: docker network inspect addons-051783: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-051783 not found
	I0929 08:29:39.093422  387539 network_create.go:289] output of [docker network inspect addons-051783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-051783 not found
	
	** /stderr **
	I0929 08:29:39.093524  387539 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:39.112614  387539 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c10860}
	I0929 08:29:39.112659  387539 network_create.go:124] attempt to create docker network addons-051783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 08:29:39.112709  387539 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-051783 addons-051783
	I0929 08:29:39.172396  387539 network_create.go:108] docker network addons-051783 192.168.49.0/24 created
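
For reference, the bridge network minikube just created can be reproduced, inspected, and removed by hand with the same docker CLI calls the cli_runner logged above. This is only a sketch using this run's values (network name addons-051783, subnet 192.168.49.0/24, MTU 1500), which will differ on another host:

	# create the network exactly as the Run: line above shows
	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-051783 \
	  addons-051783
	# verify the subnet/gateway that the kic driver uses for the static IP 192.168.49.2
	docker network inspect addons-051783
	# tear it down again (fails while a container is still attached)
	docker network rm addons-051783
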
	I0929 08:29:39.172433  387539 kic.go:121] calculated static IP "192.168.49.2" for the "addons-051783" container
	I0929 08:29:39.172502  387539 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 08:29:39.190245  387539 cli_runner.go:164] Run: docker volume create addons-051783 --label name.minikube.sigs.k8s.io=addons-051783 --label created_by.minikube.sigs.k8s.io=true
	I0929 08:29:39.209341  387539 oci.go:103] Successfully created a docker volume addons-051783
	I0929 08:29:39.209430  387539 cli_runner.go:164] Run: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 08:29:45.546598  387539 cli_runner.go:217] Completed: docker run --rm --name addons-051783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --entrypoint /usr/bin/test -v addons-051783:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.337124509s)
	I0929 08:29:45.546633  387539 oci.go:107] Successfully prepared a docker volume addons-051783
	I0929 08:29:45.546654  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:45.546683  387539 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 08:29:45.546737  387539 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 08:29:49.714226  387539 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-051783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.167437965s)
	I0929 08:29:49.714268  387539 kic.go:203] duration metric: took 4.167582619s to extract preloaded images to volume ...
	W0929 08:29:49.714368  387539 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 08:29:49.714404  387539 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 08:29:49.714455  387539 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 08:29:49.767111  387539 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-051783 --name addons-051783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-051783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-051783 --network addons-051783 --ip 192.168.49.2 --volume addons-051783:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 08:29:50.031579  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Running}}
	I0929 08:29:50.049810  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.068448  387539 cli_runner.go:164] Run: docker exec addons-051783 stat /var/lib/dpkg/alternatives/iptables
	I0929 08:29:50.119527  387539 oci.go:144] the created container "addons-051783" has a running status.
	I0929 08:29:50.119561  387539 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa...
	I0929 08:29:50.320586  387539 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 08:29:50.349341  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.370499  387539 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 08:29:50.370528  387539 kic_runner.go:114] Args: [docker exec --privileged addons-051783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 08:29:50.419544  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:29:50.438350  387539 machine.go:93] provisionDockerMachine start ...
	I0929 08:29:50.438444  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.459048  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.459374  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.459393  387539 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 08:29:50.596058  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
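
The SSH round trip above reaches the node through a host port that docker publishes for 22/tcp (33139 in this run). The same lookup and connection can be done manually; a sketch reusing the per-machine key and the docker user shown later in this log:

	# find which 127.0.0.1 port maps to the container's sshd
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-051783
	# connect with the key minikube generated for this machine (port taken from the command above)
	ssh -i /home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa \
	  -p 33139 docker@127.0.0.1 hostname
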
	
	I0929 08:29:50.596100  387539 ubuntu.go:182] provisioning hostname "addons-051783"
	I0929 08:29:50.596175  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.615278  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.615589  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.615612  387539 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-051783 && echo "addons-051783" | sudo tee /etc/hostname
	I0929 08:29:50.766108  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-051783
	
	I0929 08:29:50.766195  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:50.785560  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:50.785774  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:50.785791  387539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-051783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-051783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-051783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 08:29:50.924619  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 08:29:50.924652  387539 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 08:29:50.924674  387539 ubuntu.go:190] setting up certificates
	I0929 08:29:50.924687  387539 provision.go:84] configureAuth start
	I0929 08:29:50.924737  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:50.943329  387539 provision.go:143] copyHostCerts
	I0929 08:29:50.943421  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 08:29:50.943556  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 08:29:50.943643  387539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 08:29:50.943713  387539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.addons-051783 san=[127.0.0.1 192.168.49.2 addons-051783 localhost minikube]
	I0929 08:29:51.148195  387539 provision.go:177] copyRemoteCerts
	I0929 08:29:51.148260  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 08:29:51.148304  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.166345  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.264074  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 08:29:51.290856  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 08:29:51.316758  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 08:29:51.341889  387539 provision.go:87] duration metric: took 417.187234ms to configureAuth
	I0929 08:29:51.341922  387539 ubuntu.go:206] setting minikube options for container-runtime
	I0929 08:29:51.342090  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:29:51.342194  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.359952  387539 main.go:141] libmachine: Using SSH client type: native
	I0929 08:29:51.360170  387539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0929 08:29:51.360189  387539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 08:29:51.599614  387539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
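The SSH command above writes a one-line sysconfig drop-in that marks the service CIDR as an insecure registry for CRI-O and then restarts the runtime. Broken out of the single quoted command, the steps are (a sketch assuming a systemd host with CRI-O installed):

	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
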
	I0929 08:29:51.599641  387539 machine.go:96] duration metric: took 1.161262882s to provisionDockerMachine
	I0929 08:29:51.599653  387539 client.go:171] duration metric: took 13.215501429s to LocalClient.Create
	I0929 08:29:51.599668  387539 start.go:167] duration metric: took 13.215557799s to libmachine.API.Create "addons-051783"
	I0929 08:29:51.599677  387539 start.go:293] postStartSetup for "addons-051783" (driver="docker")
	I0929 08:29:51.599688  387539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 08:29:51.599774  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 08:29:51.599856  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.618351  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.717587  387539 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 08:29:51.721317  387539 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 08:29:51.721352  387539 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 08:29:51.721363  387539 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 08:29:51.721372  387539 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 08:29:51.721390  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 08:29:51.721462  387539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 08:29:51.721495  387539 start.go:296] duration metric: took 121.8109ms for postStartSetup
	I0929 08:29:51.721801  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.739650  387539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/config.json ...
	I0929 08:29:51.740046  387539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 08:29:51.740104  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.758050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.851192  387539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 08:29:51.855723  387539 start.go:128] duration metric: took 13.4736408s to createHost
	I0929 08:29:51.855753  387539 start.go:83] releasing machines lock for "addons-051783", held for 13.47379323s
	I0929 08:29:51.855844  387539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-051783
	I0929 08:29:51.873999  387539 ssh_runner.go:195] Run: cat /version.json
	I0929 08:29:51.874046  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.874101  387539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 08:29:51.874186  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:29:51.892677  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.892826  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:29:51.984022  387539 ssh_runner.go:195] Run: systemctl --version
	I0929 08:29:52.057018  387539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 08:29:52.197504  387539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 08:29:52.202664  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.226004  387539 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 08:29:52.226089  387539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:29:52.256267  387539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 08:29:52.256294  387539 start.go:495] detecting cgroup driver to use...
	I0929 08:29:52.256336  387539 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 08:29:52.256387  387539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 08:29:52.272062  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 08:29:52.284075  387539 docker.go:218] disabling cri-docker service (if available) ...
	I0929 08:29:52.284139  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 08:29:52.297608  387539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 08:29:52.311496  387539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 08:29:52.379434  387539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 08:29:52.452878  387539 docker.go:234] disabling docker service ...
	I0929 08:29:52.452951  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 08:29:52.471190  387539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 08:29:52.482728  387539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 08:29:52.553081  387539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 08:29:52.660824  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 08:29:52.672658  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 08:29:52.689950  387539 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21650-382648/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I0929 08:29:53.606681  387539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 08:29:53.606744  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.620746  387539 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 08:29:53.620827  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.632032  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.642692  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.653396  387539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 08:29:53.663250  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.673800  387539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.690677  387539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:29:53.701296  387539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 08:29:53.710748  387539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 08:29:53.720068  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:53.822567  387539 ssh_runner.go:195] Run: sudo systemctl restart crio
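
Condensed, the sequence of sed edits above points crictl at the CRI-O socket, pins the pause image, and switches CRI-O to the systemd cgroup driver before the restart. A sketch of the same edits with this run's paths and values, shown only to make the sequence easier to follow:

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and the cgroup manager in the drop-in config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# apply and verify
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	sudo crictl version
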
	I0929 08:29:54.052148  387539 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 08:29:54.052242  387539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 08:29:54.056279  387539 start.go:563] Will wait 60s for crictl version
	I0929 08:29:54.056335  387539 ssh_runner.go:195] Run: which crictl
	I0929 08:29:54.059686  387539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 08:29:54.093633  387539 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 08:29:54.093726  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.130572  387539 ssh_runner.go:195] Run: crio --version
	I0929 08:29:54.167704  387539 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	I0929 08:29:54.169060  387539 cli_runner.go:164] Run: docker network inspect addons-051783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:29:54.186559  387539 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 08:29:54.190730  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 08:29:54.202692  387539 kubeadm.go:875] updating cluster {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 08:29:54.202909  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.337502  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.468366  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.649435  387539 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:29:54.649610  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.777589  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:54.915339  387539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:29:55.048055  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.117941  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.117965  387539 crio.go:433] Images already preloaded, skipping extraction
	I0929 08:29:55.118025  387539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:29:55.154367  387539 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:29:55.154391  387539 cache_images.go:85] Images are preloaded, skipping loading
	I0929 08:29:55.154401  387539 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I0929 08:29:55.154505  387539 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-051783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 08:29:55.154591  387539 ssh_runner.go:195] Run: crio config
	I0929 08:29:55.197157  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:29:55.197179  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:55.197193  387539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 08:29:55.197222  387539 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-051783 NodeName:addons-051783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 08:29:55.197413  387539 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-051783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
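	(For reference, the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration block printed above is what minikube ships to the node as /var/tmp/minikube/kubeadm.yaml.new and later copies to /var/tmp/minikube/kubeadm.yaml before invoking kubeadm init. A minimal sketch for inspecting and sanity-checking that file by hand, assuming the addons-051783 profile is still running and that your kubeadm release provides the "kubeadm config validate" subcommand:

	  # Print the rendered config from inside the node
	  $ minikube -p addons-051783 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
	  # Let kubeadm itself validate it, using the cached binaries referenced in this log
	  $ minikube -p addons-051783 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml

	This is an illustrative check only; it is not part of the test run recorded below.)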
	I0929 08:29:55.197493  387539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 08:29:55.207525  387539 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 08:29:55.207613  387539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 08:29:55.217221  387539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 08:29:55.235810  387539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 08:29:55.258594  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0929 08:29:55.277991  387539 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 08:29:55.281790  387539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 08:29:55.293204  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:29:55.360353  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:29:55.382375  387539 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783 for IP: 192.168.49.2
	I0929 08:29:55.382400  387539 certs.go:194] generating shared ca certs ...
	I0929 08:29:55.382416  387539 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.382548  387539 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 08:29:55.651560  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt ...
	I0929 08:29:55.651593  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt: {Name:mk53fbf30de594b3575593db0eac7c74aa2a569b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651775  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key ...
	I0929 08:29:55.651787  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key: {Name:mk35c377f1d90bf347db7dc4624ea5b41f2dcae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:55.651874  387539 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 08:29:56.010531  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt ...
	I0929 08:29:56.010572  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt: {Name:mkabe28787fe5521225369fcdd8a8684c242d367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010810  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key ...
	I0929 08:29:56.010828  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key: {Name:mk151240dae8e83bb981e456caae01db62eb2077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.010954  387539 certs.go:256] generating profile certs ...
	I0929 08:29:56.011050  387539 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key
	I0929 08:29:56.011071  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt with IP's: []
	I0929 08:29:56.156766  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt ...
	I0929 08:29:56.156798  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: {Name:mk9b8f8dd7c08d896eb2f2a24df27c4df7b8a87a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157020  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key ...
	I0929 08:29:56.157045  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.key: {Name:mk413d2883ee03859619bae9a6ad426c2dac294b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.157158  387539 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d
	I0929 08:29:56.157188  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 08:29:56.672467  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d ...
	I0929 08:29:56.672506  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d: {Name:mka498a3f60495ba4009bb038cca767d64e6d878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672723  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d ...
	I0929 08:29:56.672747  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d: {Name:mkd42036f907b80afa6962c66b97c00a14ed475b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:56.672879  387539 certs.go:381] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt
	I0929 08:29:56.672993  387539 certs.go:385] copying /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key.fb19d06d -> /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key
	I0929 08:29:56.673074  387539 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key
	I0929 08:29:56.673103  387539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt with IP's: []
	I0929 08:29:57.054367  387539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt ...
	I0929 08:29:57.054403  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt: {Name:mk108739363f385844a88df9ec106753ae771d0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054593  387539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key ...
	I0929 08:29:57.054605  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key: {Name:mk26b223288f2fd31a6e78b544277cdc3d5192ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:57.054865  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 08:29:57.054909  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 08:29:57.054936  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 08:29:57.054959  387539 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 08:29:57.055530  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 08:29:57.081419  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 08:29:57.107158  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 08:29:57.132325  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 08:29:57.157699  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 08:29:57.182851  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 08:29:57.207862  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 08:29:57.233471  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 08:29:57.258657  387539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 08:29:57.286501  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 08:29:57.305136  387539 ssh_runner.go:195] Run: openssl version
	I0929 08:29:57.310898  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 08:29:57.323725  387539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327458  387539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.327527  387539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:29:57.334303  387539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 08:29:57.344385  387539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 08:29:57.347990  387539 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 08:29:57.348046  387539 kubeadm.go:392] StartCluster: {Name:addons-051783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-051783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:29:57.348116  387539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 08:29:57.348159  387539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 08:29:57.385638  387539 cri.go:89] found id: ""
	I0929 08:29:57.385716  387539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 08:29:57.395454  387539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 08:29:57.405038  387539 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 08:29:57.405100  387539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 08:29:57.414685  387539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 08:29:57.414705  387539 kubeadm.go:157] found existing configuration files:
	
	I0929 08:29:57.414765  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 08:29:57.424091  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 08:29:57.424158  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 08:29:57.433341  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 08:29:57.442616  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 08:29:57.442679  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 08:29:57.451665  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.460943  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 08:29:57.461008  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 08:29:57.470122  387539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 08:29:57.479257  387539 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 08:29:57.479340  387539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 08:29:57.488496  387539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 08:29:57.543664  387539 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 08:29:57.607707  387539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 08:30:06.732943  387539 kubeadm.go:310] [init] Using Kubernetes version: v1.34.1
	I0929 08:30:06.732999  387539 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 08:30:06.733103  387539 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 08:30:06.733192  387539 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 08:30:06.733241  387539 kubeadm.go:310] OS: Linux
	I0929 08:30:06.733332  387539 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 08:30:06.733405  387539 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 08:30:06.733457  387539 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 08:30:06.733497  387539 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 08:30:06.733545  387539 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 08:30:06.733624  387539 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 08:30:06.733688  387539 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 08:30:06.733751  387539 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 08:30:06.733912  387539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 08:30:06.734049  387539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 08:30:06.734125  387539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 08:30:06.734176  387539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 08:30:06.736008  387539 out.go:252]   - Generating certificates and keys ...
	I0929 08:30:06.736074  387539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 08:30:06.736130  387539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 08:30:06.736184  387539 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 08:30:06.736237  387539 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 08:30:06.736289  387539 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 08:30:06.736356  387539 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 08:30:06.736446  387539 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 08:30:06.736584  387539 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736671  387539 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 08:30:06.736803  387539 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-051783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 08:30:06.736949  387539 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 08:30:06.737047  387539 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 08:30:06.737115  387539 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 08:30:06.737192  387539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 08:30:06.737274  387539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 08:30:06.737358  387539 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 08:30:06.737431  387539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 08:30:06.737517  387539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 08:30:06.737617  387539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 08:30:06.737730  387539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 08:30:06.737805  387539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 08:30:06.739945  387539 out.go:252]   - Booting up control plane ...
	I0929 08:30:06.740037  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 08:30:06.740106  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 08:30:06.740177  387539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 08:30:06.740270  387539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 08:30:06.740362  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 08:30:06.740460  387539 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 08:30:06.740572  387539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 08:30:06.740634  387539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 08:30:06.740771  387539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 08:30:06.740901  387539 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 08:30:06.740969  387539 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.961891ms
	I0929 08:30:06.741050  387539 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 08:30:06.741148  387539 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 08:30:06.741256  387539 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 08:30:06.741361  387539 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 08:30:06.741468  387539 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.198584202s
	I0929 08:30:06.741557  387539 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.20667671s
	I0929 08:30:06.741647  387539 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002286434s
	I0929 08:30:06.741774  387539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 08:30:06.741941  387539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 08:30:06.741998  387539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 08:30:06.742173  387539 kubeadm.go:310] [mark-control-plane] Marking the node addons-051783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 08:30:06.742236  387539 kubeadm.go:310] [bootstrap-token] Using token: sez7z1.jh96okhowb57z8tt
	I0929 08:30:06.743877  387539 out.go:252]   - Configuring RBAC rules ...
	I0929 08:30:06.743987  387539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 08:30:06.744079  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 08:30:06.744207  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 08:30:06.744316  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 08:30:06.744423  387539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 08:30:06.744505  387539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 08:30:06.744607  387539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 08:30:06.744646  387539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 08:30:06.744689  387539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 08:30:06.744695  387539 kubeadm.go:310] 
	I0929 08:30:06.744746  387539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 08:30:06.744752  387539 kubeadm.go:310] 
	I0929 08:30:06.744820  387539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 08:30:06.744826  387539 kubeadm.go:310] 
	I0929 08:30:06.744869  387539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 08:30:06.744924  387539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 08:30:06.744972  387539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 08:30:06.744978  387539 kubeadm.go:310] 
	I0929 08:30:06.745052  387539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 08:30:06.745066  387539 kubeadm.go:310] 
	I0929 08:30:06.745135  387539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 08:30:06.745149  387539 kubeadm.go:310] 
	I0929 08:30:06.745232  387539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 08:30:06.745306  387539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 08:30:06.745369  387539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 08:30:06.745377  387539 kubeadm.go:310] 
	I0929 08:30:06.745445  387539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 08:30:06.745514  387539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 08:30:06.745520  387539 kubeadm.go:310] 
	I0929 08:30:06.745584  387539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745665  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 \
	I0929 08:30:06.745690  387539 kubeadm.go:310] 	--control-plane 
	I0929 08:30:06.745699  387539 kubeadm.go:310] 
	I0929 08:30:06.745764  387539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 08:30:06.745774  387539 kubeadm.go:310] 
	I0929 08:30:06.745853  387539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sez7z1.jh96okhowb57z8tt \
	I0929 08:30:06.745968  387539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c89d1bcba7bf112ef80db099da20c614f299d3d700bfbbd45746fd061bd58fe0 
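	(The join commands above mark the end of a successful kubeadm init for this profile. A quick, illustrative way to confirm the control plane brought up here is healthy, assuming the addons-051783 context is in your kubeconfig as elsewhere in this report; the node only reports Ready after the kindnet CNI applied in the next lines is in place:

	  $ kubectl --context addons-051783 get nodes -o wide
	  $ kubectl --context addons-051783 get pods -n kube-system

	These commands are not part of the recorded test run.)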
	I0929 08:30:06.745984  387539 cni.go:84] Creating CNI manager for ""
	I0929 08:30:06.745992  387539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:30:06.748010  387539 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 08:30:06.749332  387539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 08:30:06.753814  387539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I0929 08:30:06.753848  387539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 08:30:06.772879  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 08:30:06.985959  387539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 08:30:06.986041  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:06.986104  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-051783 minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78 minikube.k8s.io/name=addons-051783 minikube.k8s.io/primary=true
	I0929 08:30:06.996442  387539 ops.go:34] apiserver oom_adj: -16
	I0929 08:30:07.062951  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:07.563693  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.063933  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:08.563857  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.063020  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:09.563145  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.063764  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:10.564058  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.063584  387539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 08:30:11.131479  387539 kubeadm.go:1105] duration metric: took 4.145485124s to wait for elevateKubeSystemPrivileges
	I0929 08:30:11.131516  387539 kubeadm.go:394] duration metric: took 13.783475405s to StartCluster
	I0929 08:30:11.131536  387539 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.131680  387539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:30:11.132107  387539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:30:11.132380  387539 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:30:11.132425  387539 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 08:30:11.132561  387539 addons.go:69] Setting yakd=true in profile "addons-051783"
	I0929 08:30:11.132586  387539 addons.go:238] Setting addon yakd=true in "addons-051783"
	I0929 08:30:11.132592  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.132625  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132389  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 08:30:11.132650  387539 addons.go:69] Setting default-storageclass=true in profile "addons-051783"
	I0929 08:30:11.132650  387539 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-051783"
	I0929 08:30:11.132651  387539 addons.go:69] Setting registry-creds=true in profile "addons-051783"
	I0929 08:30:11.132672  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-051783"
	I0929 08:30:11.132675  387539 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-051783"
	I0929 08:30:11.132684  387539 addons.go:238] Setting addon registry-creds=true in "addons-051783"
	I0929 08:30:11.132675  387539 addons.go:69] Setting storage-provisioner=true in profile "addons-051783"
	I0929 08:30:11.132723  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132729  387539 addons.go:69] Setting gcp-auth=true in profile "addons-051783"
	I0929 08:30:11.132737  387539 addons.go:69] Setting ingress=true in profile "addons-051783"
	I0929 08:30:11.132749  387539 addons.go:238] Setting addon ingress=true in "addons-051783"
	I0929 08:30:11.132751  387539 mustload.go:65] Loading cluster: addons-051783
	I0929 08:30:11.132786  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.132903  387539 addons.go:69] Setting ingress-dns=true in profile "addons-051783"
	I0929 08:30:11.132921  387539 addons.go:238] Setting addon ingress-dns=true in "addons-051783"
	I0929 08:30:11.132932  387539 config.go:182] Loaded profile config "addons-051783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:30:11.133022  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133038  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133039  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133154  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133198  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133236  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133242  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133465  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.134910  387539 addons.go:69] Setting metrics-server=true in profile "addons-051783"
	I0929 08:30:11.134935  387539 addons.go:238] Setting addon metrics-server=true in "addons-051783"
	I0929 08:30:11.134966  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.135401  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.133500  387539 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.136449  387539 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-051783"
	I0929 08:30:11.136484  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.136993  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.137446  387539 addons.go:69] Setting registry=true in profile "addons-051783"
	I0929 08:30:11.137472  387539 addons.go:238] Setting addon registry=true in "addons-051783"
	I0929 08:30:11.137504  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.137785  387539 out.go:179] * Verifying Kubernetes components...
	I0929 08:30:11.132620  387539 addons.go:69] Setting inspektor-gadget=true in profile "addons-051783"
	I0929 08:30:11.137998  387539 addons.go:238] Setting addon inspektor-gadget=true in "addons-051783"
	I0929 08:30:11.138030  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.138040  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.138478  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.132724  387539 addons.go:238] Setting addon storage-provisioner=true in "addons-051783"
	I0929 08:30:11.138872  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.133573  387539 addons.go:69] Setting volcano=true in profile "addons-051783"
	I0929 08:30:11.133608  387539 addons.go:69] Setting volumesnapshots=true in profile "addons-051783"
	I0929 08:30:11.133632  387539 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-051783"
	I0929 08:30:11.133523  387539 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-051783"
	I0929 08:30:11.133512  387539 addons.go:69] Setting cloud-spanner=true in profile "addons-051783"
	I0929 08:30:11.139071  387539 addons.go:238] Setting addon cloud-spanner=true in "addons-051783"
	I0929 08:30:11.139164  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139273  387539 addons.go:238] Setting addon volumesnapshots=true in "addons-051783"
	I0929 08:30:11.139284  387539 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-051783"
	I0929 08:30:11.139311  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.139319  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140056  387539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:30:11.140193  387539 addons.go:238] Setting addon volcano=true in "addons-051783"
	I0929 08:30:11.140204  387539 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:11.140225  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.140228  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.146698  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147224  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.147394  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149077  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.149662  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.151164  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.176264  387539 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 08:30:11.181229  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 08:30:11.181264  387539 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 08:30:11.181355  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.198928  387539 addons.go:238] Setting addon default-storageclass=true in "addons-051783"
	I0929 08:30:11.198980  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.200501  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.202621  387539 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 08:30:11.202751  387539 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 08:30:11.204060  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 08:30:11.204203  387539 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 08:30:11.204287  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.204590  387539 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 08:30:11.206350  387539 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 08:30:11.206413  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 08:30:11.206494  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	W0929 08:30:11.215084  387539 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 08:30:11.220539  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.228994  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 08:30:11.229058  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.230311  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 08:30:11.230348  387539 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 08:30:11.230415  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.230456  387539 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 08:30:11.232483  387539 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-051783"
	I0929 08:30:11.232653  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:11.234514  387539 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.234537  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 08:30:11.234593  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.236276  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:11.238980  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 08:30:11.240948  387539 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 08:30:11.242224  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:11.242345  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.242360  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 08:30:11.242423  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.249763  387539 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.249815  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 08:30:11.249988  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.251632  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 08:30:11.252713  387539 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 08:30:11.256731  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 08:30:11.256909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.256925  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 08:30:11.257007  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.259232  387539 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 08:30:11.259246  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 08:30:11.261351  387539 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.261383  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 08:30:11.261446  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.261602  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 08:30:11.261990  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.264208  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 08:30:11.265661  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 08:30:11.266953  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 08:30:11.268988  387539 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 08:30:11.269090  387539 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 08:30:11.270103  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.270359  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 08:30:11.270376  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 08:30:11.270435  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.270601  387539 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 08:30:11.270610  387539 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 08:30:11.270648  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.275993  387539 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 08:30:11.282092  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.282115  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 08:30:11.282181  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.285473  387539 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 08:30:11.290090  387539 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.291158  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 08:30:11.295912  387539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 08:30:11.295961  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.299675  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.313891  387539 out.go:179]   - Using image docker.io/busybox:stable
	I0929 08:30:11.315473  387539 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 08:30:11.316814  387539 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.316848  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 08:30:11.316910  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.317050  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.323553  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.332930  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.335659  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.338799  387539 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:11.338893  387539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 08:30:11.338992  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:11.348819  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.349921  387539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:30:11.354726  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.358638  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.365096  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.375197  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.379217  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	W0929 08:30:11.383998  387539 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 08:30:11.384044  387539 retry.go:31] will retry after 372.305387ms: ssh: handshake failed: EOF
	I0929 08:30:11.384985  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.385740  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:11.455618  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 08:30:11.455652  387539 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 08:30:11.483956  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 08:30:11.483993  387539 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 08:30:11.501077  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 08:30:11.501104  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 08:30:11.512909  387539 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 08:30:11.512936  387539 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 08:30:11.513909  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 08:30:11.513933  387539 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 08:30:11.522184  387539 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.522210  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 08:30:11.532474  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 08:30:11.547827  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 08:30:11.549888  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 08:30:11.549921  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 08:30:11.551406  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 08:30:11.551429  387539 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 08:30:11.551604  387539 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.551620  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 08:30:11.562054  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 08:30:11.567658  387539 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.567682  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 08:30:11.568342  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:11.575483  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 08:30:11.579024  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 08:30:11.580084  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 08:30:11.589345  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 08:30:11.589374  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 08:30:11.591142  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:30:11.596651  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 08:30:11.617511  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 08:30:11.639242  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 08:30:11.639268  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 08:30:11.640436  387539 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.640457  387539 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 08:30:11.676132  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 08:30:11.683757  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 08:30:11.683933  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 08:30:11.694476  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 08:30:11.733321  387539 node_ready.go:35] waiting up to 6m0s for node "addons-051783" to be "Ready" ...
	I0929 08:30:11.737381  387539 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 08:30:11.737409  387539 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 08:30:11.739451  387539 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 08:30:11.742034  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 08:30:11.742058  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 08:30:11.860616  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 08:30:11.860647  387539 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 08:30:11.867313  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 08:30:11.867348  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 08:30:11.967456  387539 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:11.967489  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 08:30:11.972315  387539 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 08:30:11.972363  387539 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 08:30:12.022878  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 08:30:12.038007  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 08:30:12.038036  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 08:30:12.049218  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 08:30:12.116439  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 08:30:12.116470  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 08:30:12.218447  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 08:30:12.218482  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 08:30:12.270160  387539 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-051783" context rescaled to 1 replicas
	I0929 08:30:12.276753  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 08:30:12.276954  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 08:30:12.325380  387539 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.325408  387539 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 08:30:12.363377  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 08:30:12.640545  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.07217093s)
	W0929 08:30:12.640603  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:12.640631  387539 retry.go:31] will retry after 237.04452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
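
The apply keeps failing because at least one document in ig-crd.yaml is missing its apiVersion and kind headers, so kubectl's client-side validation rejects that file even though the other gadget objects are created. A pre-flight check of that shape (illustrative only; the helper and behaviour are assumptions, not minikube code) could look like:

	// Sketch of a pre-flight check for the failure above: every YAML document
	// handed to kubectl apply must declare apiVersion and kind, which is
	// exactly what the validator reports as missing for ig-crd.yaml.
	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func checkManifest(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for i := 0; ; i++ {
			var tm typeMeta
			if err := dec.Decode(&tm); err != nil {
				if errors.Is(err, io.EOF) {
					return nil // all documents checked
				}
				return fmt.Errorf("parse %s: %w", path, err)
			}
			if tm.APIVersion == "" || tm.Kind == "" {
				return fmt.Errorf("%s: document %d: apiVersion or kind not set", path, i)
			}
		}
	}

	func main() {
		if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
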
	I0929 08:30:12.640719  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.065212731s)
	I0929 08:30:12.641043  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.061988054s)
	I0929 08:30:12.641104  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.060998244s)
	I0929 08:30:12.641174  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049961126s)
	I0929 08:30:12.837190  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.240492795s)
	I0929 08:30:12.837239  387539 addons.go:479] Verifying addon ingress=true in "addons-051783"
	I0929 08:30:12.837345  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.219781667s)
	I0929 08:30:12.837419  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.161075095s)
	I0929 08:30:12.837447  387539 addons.go:479] Verifying addon registry=true in "addons-051783"
	I0929 08:30:12.837566  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.142937066s)
	I0929 08:30:12.837594  387539 addons.go:479] Verifying addon metrics-server=true in "addons-051783"
	I0929 08:30:12.839983  387539 out.go:179] * Verifying ingress addon...
	I0929 08:30:12.839983  387539 out.go:179] * Verifying registry addon...
	I0929 08:30:12.839983  387539 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-051783 service yakd-dashboard -n yakd-dashboard
	
	I0929 08:30:12.842161  387539 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 08:30:12.843164  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 08:30:12.846165  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:12.846189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:12.846718  387539 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 08:30:12.846741  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:12.878020  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.347067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.347316  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.444185  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.394912895s)
	W0929 08:30:13.444269  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 08:30:13.444303  387539 retry.go:31] will retry after 148.150087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
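
This second failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object is applied in the same command that creates the snapshot.storage.k8s.io CRDs, and the API server is not yet serving that group/version, hence "ensure CRDs are installed first". The retries succeed once the CRDs are established. One way to make that wait explicit (a sketch under assumed paths and timeouts, not what minikube does) is to poll discovery for the group/version:

	// Minimal sketch (not minikube code): before applying objects that depend
	// on the snapshot.storage.k8s.io/v1 CRDs, poll the API server's discovery
	// endpoint until that group/version is actually served.
	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForGroupVersion(kubeconfig, gv string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Discovery lists the group/version once its CRDs are served.
			if _, err := cs.Discovery().ServerResourcesForGroupVersion(gv); err == nil {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("%s not served within %s", gv, timeout)
	}

	func main() {
		err := waitForGroupVersion("/var/lib/minikube/kubeconfig",
			"snapshot.storage.k8s.io/v1", time.Minute)
		if err != nil {
			fmt.Println(err)
		}
	}
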
	I0929 08:30:13.444442  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.080991087s)
	I0929 08:30:13.444483  387539 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-051783"
	I0929 08:30:13.446118  387539 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 08:30:13.448654  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 08:30:13.452016  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:13.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:13.577429  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.577457  387539 retry.go:31] will retry after 254.552952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:13.593694  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0929 08:30:13.737433  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
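
Alongside the addon applies, the test waits up to 6m0s for node "addons-051783" to report Ready, and warnings like the one above are that poll observing "Ready":"False". Expressed with client-go, such a wait might look like the following (an assumed sketch with an illustrative kubeconfig path and interval, not node_ready.go itself):

	// Sketch: poll the node's NodeReady condition until it reports True or
	// the deadline passes, mirroring the "waiting for node Ready" log lines.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("node %q not Ready within %s", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitNodeReady(cs, "addons-051783", 6*time.Minute))
	}
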
	I0929 08:30:13.832408  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:13.846313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:13.846455  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:13.952328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.346125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.346258  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:14.845799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:14.845811  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:14.951680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.346030  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.346221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.453724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:15.845371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:15.845746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:15.952128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.053703  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.459968372s)
	I0929 08:30:16.053810  387539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.22138062s)
	W0929 08:30:16.053859  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:16.053883  387539 retry.go:31] will retry after 481.367348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:16.235952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:16.346141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.346415  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.452678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:16.535851  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:16.846177  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:16.846299  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:16.951988  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:17.090051  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.090084  387539 retry.go:31] will retry after 480.173629ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:17.345653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.345864  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.453018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:17.571186  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:17.846646  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:17.846705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:17.952363  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:18.133672  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:18.133711  387539 retry.go:31] will retry after 1.605452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:30:18.236698  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:18.345996  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.346227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.831696  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 08:30:18.831773  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:18.846470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:18.846549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:18.851454  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:18.951695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:18.969096  387539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 08:30:18.989016  387539 addons.go:238] Setting addon gcp-auth=true in "addons-051783"
	I0929 08:30:18.989103  387539 host.go:66] Checking if "addons-051783" exists ...
	I0929 08:30:18.989486  387539 cli_runner.go:164] Run: docker container inspect addons-051783 --format={{.State.Status}}
	I0929 08:30:19.008865  387539 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 08:30:19.008932  387539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-051783
	I0929 08:30:19.027173  387539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/addons-051783/id_rsa Username:docker}
	I0929 08:30:19.120755  387539 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 08:30:19.121923  387539 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 08:30:19.122900  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 08:30:19.122919  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 08:30:19.143102  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 08:30:19.143126  387539 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 08:30:19.162866  387539 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.162888  387539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 08:30:19.183136  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 08:30:19.346348  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.346554  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.453192  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:19.501972  387539 addons.go:479] Verifying addon gcp-auth=true in "addons-051783"
	I0929 08:30:19.503639  387539 out.go:179] * Verifying gcp-auth addon...
	I0929 08:30:19.505850  387539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 08:30:19.554509  387539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 08:30:19.554531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:19.740347  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:19.845786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:19.845969  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:19.951989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.008598  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.299545  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.299581  387539 retry.go:31] will retry after 1.544699875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:20.345964  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.452158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:20.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:20.736317  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:20.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:20.845861  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:20.951742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.009815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.346000  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.346032  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.451989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:21.508685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:21.845176  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:21.845841  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:21.846114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:21.952278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.009273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:22.345019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.346075  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 08:30:22.403582  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.403621  387539 retry.go:31] will retry after 3.049515308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:22.452614  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:22.512271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:22.736403  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:22.845553  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:22.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:22.951921  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.010165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.345659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.345820  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.451629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:23.509351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:23.846115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:23.846228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:23.952047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.008926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:24.346005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.452131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:24.509321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:24.737273  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:24.845357  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:24.845622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:24.951671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.010110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.346716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.346788  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:25.453468  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:25.510278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:25.845392  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:25.845982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:25.951775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 08:30:26.006239  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.006394  387539 retry.go:31] will retry after 2.506202781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:26.008893  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.346077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.346300  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.452870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:26.510002  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:26.845936  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:26.846437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:26.952599  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.010142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:27.237031  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:27.345974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.346037  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.451702  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:27.509719  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:27.845995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:27.846262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:27.952122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.008966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.345646  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.346068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.452500  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:28.509096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:28.513240  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:28.845526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:28.845724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:28.952636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.009980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.073172  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:29.073204  387539 retry.go:31] will retry after 5.087993961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
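	[note] The validation failure above is why this addon apply keeps retrying: kubectl's client-side validation found a document in /etc/kubernetes/addons/ig-crd.yaml with no top-level apiVersion or kind field ("[apiVersion not set, kind not set]"), so the CRD portion is rejected while the remaining resources apply cleanly ("unchanged"/"configured"). For orientation only, a minimal sketch of the header shape such a document would need is shown below; the group and name are placeholders, not the contents of the actual addon file:
	
	  # hypothetical CRD header, for illustration only (not the real ig-crd.yaml)
	  apiVersion: apiextensions.k8s.io/v1
	  kind: CustomResourceDefinition
	  metadata:
	    name: examples.gadget.example.io   # placeholder name, not from this test run
	
	The --validate=false escape hatch that the error message itself suggests would merely skip this client-side check rather than fix the manifest.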
	I0929 08:30:29.345624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.345890  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.451566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:29.509314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:29.736247  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:29.845167  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:29.845589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:29.952470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.009285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.345961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.346228  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.451762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:30.509671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:30.845660  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:30.845938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:30.951757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.010434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:31.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.346159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.452024  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:31.508639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:31.736734  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:31.845802  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:31.846069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:31.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.008631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.345183  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.345554  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.452360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:32.509283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:32.846011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:32.846198  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:32.952029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.008505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.345468  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.346184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:33.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:33.845492  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:33.845973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:33.951615  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.009499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:34.161747  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 08:30:34.236880  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:34.346017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.346168  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.451966  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:34.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:34.713989  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.714029  387539 retry.go:31] will retry after 10.074915141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:34.846205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:34.846262  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:34.952041  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.009299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.346101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.346147  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.452133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:35.508814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:35.845885  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:35.846022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:35.952026  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.008870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:36.345968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.346092  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.452038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:36.508708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:36.736573  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:36.845946  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:36.846138  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:36.951934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.010147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.345611  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.346391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.452092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:37.508537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:37.845236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:37.845710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:37.951391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.009185  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:38.345379  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.345497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.452268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:38.509054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:38.736952  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:38.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:38.845942  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:38.951848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.009583  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.345482  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.345749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.452467  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:39.509234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:39.845877  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:39.845968  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:39.951690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:40.345848  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.346009  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.451555  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:40.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:40.737059  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:40.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:40.845985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:40.951632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.009343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.345541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.346172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.452233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:41.509214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:41.846040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:41.846112  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:41.951896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.009603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.345289  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.451783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:42.509700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:42.845799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:42.845983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:42.951967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.008596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:43.236598  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:43.346000  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.346147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.452087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:43.509013  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:43.846134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:43.846259  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:43.952036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.008744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.345998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.346244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.452116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:44.508722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:44.789668  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:44.848890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:44.848956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:44.952825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:45.346063  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.346265  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:30:45.349824  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.349902  387539 retry.go:31] will retry after 10.254228561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:45.451609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:45.509499  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:45.736311  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:45.845308  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:45.845508  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:45.952578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.345276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.345820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.451640  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:46.509515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:46.845665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:46.845801  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:46.951610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.009568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:47.346135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.347757  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.451685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:47.509687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:47.736659  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:47.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:47.846278  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:47.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.010881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.345580  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.346116  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.452054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:48.508539  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:48.845649  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:48.845738  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:48.951441  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.009204  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.345513  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.345678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.451528  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:49.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:49.845483  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:49.846049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:49.951870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.009622  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:50.236705  387539 node_ready.go:57] node "addons-051783" has "Ready":"False" status (will retry)
	I0929 08:30:50.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.346397  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.452090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:50.508959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:50.845410  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:50.846029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:50.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.008722  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.345637  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.346169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.452115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:51.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:51.845715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:51.845962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:51.951758  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.009370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.345902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.451699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:52.509385  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:52.735450  387539 node_ready.go:49] node "addons-051783" is "Ready"
	I0929 08:30:52.735486  387539 node_ready.go:38] duration metric: took 41.00212415s for node "addons-051783" to be "Ready" ...
	I0929 08:30:52.735510  387539 api_server.go:52] waiting for apiserver process to appear ...
	I0929 08:30:52.735569  387539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 08:30:52.754269  387539 api_server.go:72] duration metric: took 41.621848619s to wait for apiserver process to appear ...
	I0929 08:30:52.754302  387539 api_server.go:88] waiting for apiserver healthz status ...
	I0929 08:30:52.754329  387539 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 08:30:52.758629  387539 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 08:30:52.759566  387539 api_server.go:141] control plane version: v1.34.1
	I0929 08:30:52.759591  387539 api_server.go:131] duration metric: took 5.283085ms to wait for apiserver health ...
	I0929 08:30:52.759601  387539 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 08:30:52.763531  387539 system_pods.go:59] 20 kube-system pods found
	I0929 08:30:52.763568  387539 system_pods.go:61] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.763584  387539 system_pods.go:61] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.763591  387539 system_pods.go:61] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.763598  387539 system_pods.go:61] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.763604  387539 system_pods.go:61] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.763610  387539 system_pods.go:61] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.763618  387539 system_pods.go:61] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.763625  387539 system_pods.go:61] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.763632  387539 system_pods.go:61] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.763646  387539 system_pods.go:61] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.763655  387539 system_pods.go:61] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.763661  387539 system_pods.go:61] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.763671  387539 system_pods.go:61] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.763677  387539 system_pods.go:61] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.763685  387539 system_pods.go:61] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.763695  387539 system_pods.go:61] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.763703  387539 system_pods.go:61] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.763711  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.763762  387539 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.763769  387539 system_pods.go:61] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.763779  387539 system_pods.go:74] duration metric: took 4.172047ms to wait for pod list to return data ...
	I0929 08:30:52.763792  387539 default_sa.go:34] waiting for default service account to be created ...
	I0929 08:30:52.766094  387539 default_sa.go:45] found service account: "default"
	I0929 08:30:52.766121  387539 default_sa.go:55] duration metric: took 2.321933ms for default service account to be created ...
	I0929 08:30:52.766133  387539 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 08:30:52.770696  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:52.770757  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending
	I0929 08:30:52.770770  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:52.770776  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending
	I0929 08:30:52.770784  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending
	I0929 08:30:52.770789  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending
	I0929 08:30:52.770794  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:52.770802  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:52.770808  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:52.770815  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:52.770824  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:52.770843  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:52.770851  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:52.770863  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:52.770872  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending
	I0929 08:30:52.770881  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:52.770891  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:52.770899  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:52.770908  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending
	I0929 08:30:52.770928  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:52.770935  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending
	I0929 08:30:52.770959  387539 retry.go:31] will retry after 296.951592ms: missing components: kube-dns
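	[note] At this point the readiness poll is blocked only on kube-dns: etcd, the apiserver, controller-manager, scheduler and kube-proxy are Running, while coredns-66bc5c9577-n8bx8 is still Pending, so system_pods backs off (296ms here) and re-lists kube-system. An equivalent manual check, for illustration only (assuming the conventional k8s-app=kube-dns label that CoreDNS carries in kubeadm-style clusters such as this minikube node), would be:
	
	  kubectl --context addons-051783 -n kube-system get pods -l k8s-app=kube-dns -w
	
	The -w flag keeps watching until the pod flips to Running, which in this run happens by the 08:30:54 pod listing further below.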
	I0929 08:30:52.847272  387539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 08:30:52.847306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:52.847283  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:52.956403  387539 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 08:30:52.956428  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.058959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.074050  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.074084  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.074092  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.074102  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.074109  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.074114  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.074118  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.074124  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.074127  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.074131  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.074136  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.074139  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.074143  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.074148  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.074158  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.074162  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.074167  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.074171  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.074177  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074185  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.074189  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.074204  387539 retry.go:31] will retry after 260.486294ms: missing components: kube-dns
	I0929 08:30:53.340885  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.340928  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.340939  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.340949  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.340957  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.340970  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.340976  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.340984  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.340989  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.340994  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.341002  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.341007  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.341013  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.341020  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.341029  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.341037  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.341045  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.341052  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.341071  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341079  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.341086  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.341104  387539 retry.go:31] will retry after 402.781904ms: missing components: kube-dns
	I0929 08:30:53.345674  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.452965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:53.509656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:53.749539  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:53.749584  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:53.749596  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 08:30:53.749607  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:53.749615  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:53.749625  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:53.749637  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:53.749644  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:53.749652  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:53.749658  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:53.749673  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:53.749681  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:53.749688  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:53.749700  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:53.749713  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:53.749725  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:53.749741  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:53.749752  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:53.749760  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749772  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:53.749780  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 08:30:53.749803  387539 retry.go:31] will retry after 372.296454ms: missing components: kube-dns
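	[Annotation] The system_pods.go lines above show the pattern behind this wait: list kube-system pods, check that required components (here kube-dns) are Running, and retry after a short delay if not. A minimal client-go sketch of that kind of poll is below; it is illustrative only (not minikube's code), and the kubeconfig path, label selector, and timings are assumptions for the example.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForKubeDNS polls kube-system until a pod matching the CoreDNS label
	// reports Running, or the timeout expires.
	func waitForKubeDNS(clientset *kubernetes.Clientset, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil // required component is up
					}
				}
			}
			time.Sleep(400 * time.Millisecond) // comparable to the ~372ms retry above
		}
		return fmt.Errorf("timed out waiting for kube-dns")
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		if err := waitForKubeDNS(kubernetes.NewForConfigOrDie(cfg), 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("kube-dns is running")
	}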
	I0929 08:30:53.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:53.846351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:53.953470  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.009621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.127961  387539 system_pods.go:86] 20 kube-system pods found
	I0929 08:30:54.128007  387539 system_pods.go:89] "amd-gpu-device-plugin-xvf9b" [af4f61bf-c919-44d4-8d12-3579f9dce9c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 08:30:54.128016  387539 system_pods.go:89] "coredns-66bc5c9577-n8bx8" [1cdd872b-c580-4868-ba00-f802f974ca2d] Running
	I0929 08:30:54.128029  387539 system_pods.go:89] "csi-hostpath-attacher-0" [7388d726-6d5c-49c1-9c62-21090f7181ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 08:30:54.128037  387539 system_pods.go:89] "csi-hostpath-resizer-0" [26afab29-48bd-420d-8efb-5c9499bf2e5b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 08:30:54.128046  387539 system_pods.go:89] "csi-hostpathplugin-59n9q" [e94a145b-2c84-4ba7-a591-d3231213d031] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 08:30:54.128055  387539 system_pods.go:89] "etcd-addons-051783" [f6b9c60c-f2ed-4dcb-b3db-fcac3d67fae3] Running
	I0929 08:30:54.128068  387539 system_pods.go:89] "kindnet-47v7m" [1f3f9ab8-442a-498b-95c3-795b67b9937d] Running
	I0929 08:30:54.128073  387539 system_pods.go:89] "kube-apiserver-addons-051783" [eaea7fba-7666-4e74-8f6b-a6dea4c3c3e0] Running
	I0929 08:30:54.128080  387539 system_pods.go:89] "kube-controller-manager-addons-051783" [72e95b52-2d8b-40ae-b7a6-a5bb991d63bd] Running
	I0929 08:30:54.128094  387539 system_pods.go:89] "kube-ingress-dns-minikube" [ec159452-503b-4642-b822-ea6cdac8e16e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 08:30:54.128101  387539 system_pods.go:89] "kube-proxy-wbl7p" [a598e1fd-1af6-4562-8559-5faa531c8d56] Running
	I0929 08:30:54.128111  387539 system_pods.go:89] "kube-scheduler-addons-051783" [921ab7d2-3842-4a3a-9ac7-ba3801f03dc3] Running
	I0929 08:30:54.128119  387539 system_pods.go:89] "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 08:30:54.128131  387539 system_pods.go:89] "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 08:30:54.128140  387539 system_pods.go:89] "registry-66898fdd98-mpkgd" [47b28054-9887-4c42-a0ff-51664d1dcd69] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 08:30:54.128150  387539 system_pods.go:89] "registry-creds-764b6fb674-kg4xs" [c3b3d985-3d3a-4ad4-8138-ecdab68f0562] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 08:30:54.128156  387539 system_pods.go:89] "registry-proxy-n2gtf" [ef82ffa5-c021-4537-bfb2-367118ce12ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 08:30:54.128167  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n65gp" [d8bddc78-350d-45c5-9361-48262c9442a1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128182  387539 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xpkwb" [2dddaa6f-aee0-4f1b-9c24-9e006a4b5f3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 08:30:54.128190  387539 system_pods.go:89] "storage-provisioner" [605a4408-a454-4600-a587-7d951352dc66] Running
	I0929 08:30:54.128201  387539 system_pods.go:126] duration metric: took 1.362060932s to wait for k8s-apps to be running ...
	I0929 08:30:54.128214  387539 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 08:30:54.128269  387539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 08:30:54.143506  387539 system_svc.go:56] duration metric: took 15.282529ms WaitForService to wait for kubelet
	I0929 08:30:54.143541  387539 kubeadm.go:578] duration metric: took 43.011126136s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:30:54.143567  387539 node_conditions.go:102] verifying NodePressure condition ...
	I0929 08:30:54.146666  387539 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 08:30:54.146694  387539 node_conditions.go:123] node cpu capacity is 8
	I0929 08:30:54.146710  387539 node_conditions.go:105] duration metric: took 3.13874ms to run NodePressure ...
	I0929 08:30:54.146723  387539 start.go:241] waiting for startup goroutines ...
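	[Annotation] The WaitForService step above probes the kubelet unit with `systemctl is-active --quiet`, which exits 0 only when the unit is active. The sketch below mirrors that probe as a standalone check; it runs systemctl locally for simplicity (minikube runs it over SSH inside the node), so treat it as an assumption-laden illustration rather than minikube's ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// kubeletActive reports whether the kubelet systemd unit is currently active.
	func kubeletActive() bool {
		// Exit status 0 from `systemctl is-active --quiet kubelet` means "active".
		return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		start := time.Now()
		if kubeletActive() {
			fmt.Printf("kubelet is active (checked in %s)\n", time.Since(start))
		} else {
			fmt.Println("kubelet is not active")
		}
	}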
	I0929 08:30:54.346096  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.346452  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.452512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:54.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:54.845681  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:54.846213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:54.952945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.009776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.346034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.346210  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.452987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:55.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:55.604936  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:30:55.845661  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:55.846303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:55.952647  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.009596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:30:56.227075  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:30:56.227117  387539 retry.go:31] will retry after 11.111742245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
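	[Annotation] The retry.go lines above show the apply being re-attempted with a growing, jittered delay after the validation failure. A rough sketch of that retry-with-backoff pattern follows; the delays, attempt count, and manifest paths (taken from the log) are illustrative assumptions, not minikube's actual backoff parameters.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyManifests shells out to `kubectl apply --force -f ...` and returns any failure.
	func applyManifests(files ...string) error {
		args := []string{"apply", "--force"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		delay := 10 * time.Second
		var lastErr error
		for attempt := 1; attempt <= 3; attempt++ {
			lastErr = applyManifests("/etc/kubernetes/addons/ig-crd.yaml",
				"/etc/kubernetes/addons/ig-deployment.yaml")
			if lastErr == nil {
				fmt.Println("apply succeeded")
				return
			}
			jitter := time.Duration(rand.Int63n(int64(2 * time.Second)))
			fmt.Printf("apply failed, will retry after %s: %v\n", delay+jitter, lastErr)
			time.Sleep(delay + jitter)
			delay *= 2 // grow the delay between attempts
		}
		fmt.Printf("giving up: %v\n", lastErr)
	}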
	I0929 08:30:56.346587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.346664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.452545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:56.509737  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:56.846282  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:56.846404  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:56.952291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.008904  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.346213  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.452947  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:57.553095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:57.845310  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:57.845536  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:57.952617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.009229  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.345911  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.452036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:58.509465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:58.846116  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:58.846300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:58.954223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.009020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.345799  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.345929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.451999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:30:59.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:30:59.846016  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:30:59.846048  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:30:59.951820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.009510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.346008  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.346043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.452095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:00.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:00.845635  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:00.846133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:00.952120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.008582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.346305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.346398  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.452779  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:01.509350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:01.845977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:01.846089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:01.951976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.009725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.346046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.346195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.452152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:02.508856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:02.845624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:02.845816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:02.951786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.009165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.345570  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.345806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.452275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:03.508934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:03.846184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:03.846321  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:03.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.009280  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.345995  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.346111  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.452256  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:04.509372  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:04.845664  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:04.846025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:04.952025  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.009380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.346175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.346181  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.452623  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:05.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:05.845511  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:05.845789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:05.951736  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.009300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.345807  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.346120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.452299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:06.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:06.845431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:06.845747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:06.951811  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.009905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.339106  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:07.345597  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:07.346187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.452931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:07.509578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:07.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:07.846266  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 08:31:07.899059  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 08:31:07.899089  387539 retry.go:31] will retry after 40.559996542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
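	[Annotation] The repeated stderr above says kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one document in it has no apiVersion and kind set, so retrying the same file keeps failing until the manifest is fixed (or validation is disabled with --validate=false). The sketch below is a hypothetical pre-flight check, not part of minikube: it decodes each YAML document in that file and reports which ones are missing those fields. Only the file path comes from the log; the rest is assumed for illustration.

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 1; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if doc == nil {
				continue // empty document between `---` separators
			}
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				fmt.Printf("document %d is missing apiVersion and/or kind\n", i)
			}
		}
	}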
	I0929 08:31:07.952238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.009242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.345806  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.345963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.452237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:08.508727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:08.846489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:08.846533  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:08.952772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.010175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.346214  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.346399  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.452814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:09.509683  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:09.846071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:09.846175  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:09.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.009101  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.345238  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.346055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:10.509087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:10.845466  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:10.845735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:10.951734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.009376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.346018  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.346093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.452602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:11.509357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:11.845819  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:11.846106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:11.952393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.009094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.345109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.345635  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.452900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:12.509747  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:12.845711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:12.845914  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:12.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.009115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.345408  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.452396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:13.509231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:13.845494  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:13.846119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:13.952602  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.010164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.346040  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.452353  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:14.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:14.845489  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:14.845815  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:14.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.009711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.346376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.452852  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:15.509706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:15.846977  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:15.847062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:15.952541  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.009327  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.345888  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.346265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.452465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:16.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:16.845448  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:16.845961  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:16.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.010066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.345301  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.345698  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.451859  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:17.552769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:17.845897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:17.846010  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:17.951895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.345789  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.345935  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.451969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:18.509592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:18.845904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:18.846320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:18.952560  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.009221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.345672  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.346133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:19.509390  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:19.845688  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:19.845944  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:19.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.346107  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.451968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:20.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:20.845746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:20.846140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:20.952760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.345888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.345967  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.452022  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:21.510304  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:21.845633  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:21.846006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:21.952314  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.346112  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.346281  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.452380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:22.509171  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:22.845463  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:22.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:22.952321  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.008794  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.345924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.346134  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.452014  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:23.510198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:23.845423  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:23.845908  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:23.952121  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.008788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.345818  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.345880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.452709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:24.509239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:24.846079  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:24.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:24.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.346408  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.346645  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.452594  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:25.509856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:25.846416  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:25.846446  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:25.952577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.009243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.346002  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.346328  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.452568  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:26.509226  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:26.845630  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:26.845989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:26.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.009102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.344984  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.345670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.451721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:27.509670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:27.846298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:27.846328  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:27.952436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.009088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.345071  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.345514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.452990  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:28.509800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:28.845538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:28.845549  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:28.952752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.345731  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.345767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.451898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:29.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:29.845660  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:29.845743  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:29.954437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.009591  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.345694  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.345826  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.451850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:30.509114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:30.845457  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:30.845863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:30.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.008880  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.452522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:31.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:31.845340  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:31.846098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:31.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.345562  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.345751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.451752  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:32.509498  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:32.846005  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:32.846015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:32.952296  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.008916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.346067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.346085  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.452074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:33.508388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:33.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:33.846407  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:33.952505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.009198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.345603  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.345997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.452284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:34.508994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:34.845333  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:34.845899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:34.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.346173  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.346187  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.452081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:35.508670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:35.848907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:35.848908  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:35.951592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.009305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.345881  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.346217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.452391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:36.509291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:36.845641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:36.846291  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:36.952619  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.345641  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.346183  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.452340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:37.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:37.845435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:37.845657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:37.951659  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.009365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.345904  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.345948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:38.508874  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:38.846399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:38.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:38.952667  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.009535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.346057  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.346313  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.452593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:39.509172  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:39.845821  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:39.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:39.951931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.009666  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.345746  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.345756  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.451930  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:40.509717  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:40.845968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:40.846159  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:40.952302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.008813  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.345751  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.346083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.452220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:41.508800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:41.846373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:41.846428  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:41.952582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.009477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.345816  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.346146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:42.509082  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:42.845206  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:42.845593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:42.952920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.009344  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.345643  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.346032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.452584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:43.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:43.846130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:43.846227  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:43.952242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.009320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.345668  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.346165  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.452320  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:44.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:44.846497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:44.846568  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:44.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.009270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.346009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.346017  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.452179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:45.508810  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:45.846318  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:45.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:45.953200  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.053765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.345928  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.345949  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.451841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:46.509367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:46.845759  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:46.845864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:46.952208  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.009049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.346089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.346296  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.452276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:47.509276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:47.845998  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:47.846031  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:47.953092  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.008958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.348118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.348220  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.452645  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:48.459706  387539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 08:31:48.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:48.845521  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:48.846369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:48.952245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 08:31:49.009817  387539 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 08:31:49.009958  387539 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
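
(Editorial aside on the apply failure logged above: kubectl's client-side validation rejects a manifest document whose top-level apiVersion or kind field is missing, which is exactly what the error for /etc/kubernetes/addons/ig-crd.yaml reports before minikube retries the addon. The short Go sketch below is not part of minikube or of this log; the file path handling, the gopkg.in/yaml.v3 dependency, and the check itself are illustrative assumptions showing the kind of pre-flight test that would surface the same "apiVersion not set, kind not set" problem before ever invoking kubectl apply.)

package main

// Minimal pre-flight check for a Kubernetes manifest file: verify that the
// first YAML document carries the required top-level apiVersion and kind
// fields. Illustrative sketch only, assuming gopkg.in/yaml.v3 is available.

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: checkmanifest <file.yaml>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// For brevity this decodes only the first YAML document in the file.
	var doc map[string]interface{}
	if err := yaml.Unmarshal(data, &doc); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var missing []string
	if doc["apiVersion"] == nil {
		missing = append(missing, "apiVersion")
	}
	if doc["kind"] == nil {
		missing = append(missing, "kind")
	}
	if len(missing) > 0 {
		// Mirrors the shape of kubectl's "apiVersion not set, kind not set" message.
		fmt.Printf("invalid manifest: %v not set\n", missing)
		os.Exit(1)
	}
	fmt.Println("apiVersion and kind are set")
}

(End of aside; the minikube log continues below.)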
	I0929 08:31:49.346161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.346314  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.452693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:49.509721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:49.846323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:49.846403  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:49.952288  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.009479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.346165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.346262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:50.511027  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:50.846141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:50.846346  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:50.952309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.345651  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.346358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.452496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:51.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:51.845910  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:51.846102  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:51.952292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.008948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.346231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.346476  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.452572  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:52.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:52.846165  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:52.846219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:52.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.009004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.346193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.346397  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.452012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:53.510161  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:53.845342  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:53.845616  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:53.952894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.009820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.346066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.346111  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.451951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:54.509668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:54.845920  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:54.845975  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:54.952307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.008953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.346482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.346564  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.452557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:55.509198  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:55.846008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:55.846122  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:55.952273  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.009005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.345943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.345987  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.451970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:56.509693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:56.846279  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:56.846364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:56.952734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.009777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.345985  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.452169  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:57.509107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:57.845868  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:57.845918  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:57.952230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.008806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.346324  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.346362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.452386  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:58.509302  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:58.845621  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:58.846009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:58.952271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.009231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.345552  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.346005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.452425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:31:59.509368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:31:59.846005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:31:59.846038  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:31:59.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.009825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.346371  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.346435  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:00.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:00.845617  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:00.845923  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:00.952434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.009268  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.346190  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.452432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:01.509356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:01.845820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:01.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:01.952038  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.009864  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.345911  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.346056  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.452757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:02.509501  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:02.845906  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:02.846292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:02.952670  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.345785  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.346020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.452457  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:03.509461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:03.846203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:03.846249  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:03.952857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.008766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.346191  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.346205  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:04.509374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:04.845874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:04.846090  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:04.952199  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.009031  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.345858  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.345930  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.451888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:05.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:05.846482  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:05.846625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:05.952585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.009218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.345706  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.346319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.452653  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:06.509286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:06.845541  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:06.845704  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:06.951956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.009468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.345695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.345745  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.451863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:07.510159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:07.845888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:07.845901  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:07.951951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.345980  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.346046  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.452589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:08.509271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:08.846025  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:08.846034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:08.952511  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.008945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.346573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.346620  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.452981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:09.509795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:09.846346  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:09.846438  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:09.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.009110  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.345481  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.345733  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.451902  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:10.509713  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:10.846101  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:10.846139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:10.952420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.009168  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.346099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.346223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.452631  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:11.510142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:11.845960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:11.845982  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:11.951897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.010286  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.345508  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.346153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.452434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:12.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:12.845813  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:12.846236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:12.952299  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.345858  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.346006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.452117  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:13.508849  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:13.845790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:13.846007  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:13.951901  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.346064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.346065  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.452106  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:14.508883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:14.846158  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:14.846171  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:14.952374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.008914  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.346557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.346608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.452803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:15.509895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:15.846827  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:15.846861  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:15.952699  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.009411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.345859  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.346429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.452726  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:16.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:16.846572  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:16.846610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:16.952453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.345250  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.345814  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:17.508754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:17.846167  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:17.846211  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:17.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.008739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.346188  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.346255  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:18.509267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:18.846236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:18.846235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:18.952637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.009342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.345703  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.346091  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:19.509449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:19.846316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:19.846344  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:19.952405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.345264  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.346400  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.452542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:20.509262  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:20.845773  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:20.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:20.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.346194  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:21.509376  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:21.848839  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:21.849867  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:21.952936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.010023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.345625  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.346114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.452763  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:22.509711  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:22.846197  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:22.846244  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:22.952388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.009290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.345800  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.346246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.452672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:23.509534  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:23.846304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:23.846334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:23.952785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.009642  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.346072  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.346415  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.452739  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:24.509705  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:24.846107  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:24.846335  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:24.952786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.009641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.346282  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.346356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.452912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:25.509769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:25.846639  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:25.846675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:25.953086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.009130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.345739  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.346053  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.452469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:26.510429  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:26.845959  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:26.846628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:26.953298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.009036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.347053  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:27.347275  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.452777  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:27.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:27.846103  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:27.846145  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.072906  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.073113  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.346059  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:28.508950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:28.845955  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:28.846095  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:28.952404  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.009351  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.347464  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.347629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.453517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:29.553437  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:29.846126  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:29.846245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:29.952170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.008971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.345959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.346015  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:30.509418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:30.845766  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:30.846285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:30.952392  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.008956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.345931  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.346361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.452474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:31.509134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:31.845897  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:31.846021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:31.952093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.345435  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.345772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.452246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:32.509083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:32.845812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:32.845956  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:32.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.346099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.346120  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:33.452146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:33.508729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:33.846479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:33.846503  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.036243  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.036382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.345600  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.345895  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.452267  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:34.508982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:34.845610  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:34.845774  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:34.953630  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.008888  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.346785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.346853  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.451866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:35.509729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:35.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:35.846406  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:35.954174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.055655  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.346236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:36.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:36.845459  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:36.845939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:36.951953  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.009866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.346021  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.346064  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.452076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:37.509650  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:37.846276  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:37.952853  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.345624  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.346137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.452271  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:38.509005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:38.845239  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:38.845607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:38.953072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.009685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.346312  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.346343  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.452629  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:39.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:39.846245  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:39.846305  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:39.952898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.009523  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.346058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.346222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.452218  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:40.509154  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:40.845436  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:40.845959  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:40.952223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.008967  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.345362  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.345715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.451987  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:41.509593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:41.846030  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:41.846208  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:41.952460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.009083  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.345364  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.345994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.452312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:42.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:42.845412  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:42.846137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:42.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.009246  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.345531  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.345851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.451965  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:43.509607  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:43.845677  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:43.845725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:43.953242  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.008881  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.346140  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.346245  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:44.508976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:44.846058  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:44.846073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:44.952220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.008952  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.346260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.452230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:45.508958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:45.846253  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:45.846260  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:45.952496  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.009248  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.345700  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.346422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.452785  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:46.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:46.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:46.846041  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:46.951796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.009505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.345956  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.345992  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.451971  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:47.509761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:47.846237  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:47.846334  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:47.952805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.009735  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.345689  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.346306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.452750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:48.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:48.845880  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:48.846359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:48.952570  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.009297  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.345969  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.346094  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.452240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:49.509049  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:49.845855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:49.846006  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:49.952184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.008907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.345976  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.346081  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.451788  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:50.510100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:50.845304  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:50.848309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:50.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.009220  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.345805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.345874  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.451634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:51.509582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:51.845944  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:51.846447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:51.953076  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.008934  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.345804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.345877  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.452096  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:52.508656  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:52.846195  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:52.846222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:52.952603  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.009374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.345675  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.346124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.452231  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:53.846036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:53.846118  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:53.952566  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.009207  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.345383  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.345922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.452193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:54.508803  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:54.846518  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:54.846608  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:54.952787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.009360  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.346141  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.346211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.452319  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:55.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:55.846350  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:55.846419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:55.952451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.009066  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.345454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.345940  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.452221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:56.508812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:56.846088  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:56.846113  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:56.952011  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.009709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.345986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.346090  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:57.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:57.846361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:57.846431  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:57.952871  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.009495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.346447  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.346500  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.452249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:58.508841  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:58.845781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:58.845828  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:58.951889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.009775  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.346440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.346485  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.452552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:32:59.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:32:59.845729  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:32:59.845869  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:32:59.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.008817  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.346461  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.346526  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.455517  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:00.508985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:00.845761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:00.845875  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:00.952068  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.009767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.346151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.346291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.452530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:01.553772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:01.845974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:01.846019  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:01.951993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.010114  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.345293  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.345801  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.451761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:02.509345  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:02.845976  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:02.846143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:02.952766  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.009431  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.345682  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.346257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.453746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:03.509942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:03.846258  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:03.846309  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:03.952266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.009753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.346015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.346114  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.452202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:04.509708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:04.846315  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:04.846361  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:04.952432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.009137  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.345758  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.345912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 08:33:05.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:05.552401  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:05.846099  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:05.846460  387539 kapi.go:107] duration metric: took 2m53.003293209s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 08:33:05.954425  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.011134  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.346506  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:06.509064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:06.845958  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:06.952356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.009108  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.345705  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.453032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:07.510592  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:07.846109  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:07.954081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.053417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.351454  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.453361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:08.509493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:08.846396  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:08.953209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.013355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.346185  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.452954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:09.509941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:09.846594  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:09.953166  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.011098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.345673  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.452685  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:10.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:10.846291  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:10.952757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.010232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.345715  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.452872  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:11.509757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:11.845940  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:11.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.009576  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.476146  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.476164  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:12.508903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:12.846546  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:12.952547  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.009054  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.345224  387539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 08:33:13.452440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:13.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:13.845854  387539 kapi.go:107] duration metric: took 3m1.003676867s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 08:33:13.953193  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.452414  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:14.509249  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:14.953043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.009571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:15.509029  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:15.952456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.008996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.452993  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:16.509565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:16.951754  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.010077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.452637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:17.509767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:17.951958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.009558  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.452610  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:18.509383  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:18.953289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.452727  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:19.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:19.952537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.054307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.453283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:20.508941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:20.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.009232  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:21.509772  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:21.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.452743  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:22.509695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:22.952306  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.452565  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:23.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:23.952897  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.009648  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:24.508741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:24.952701  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.009545  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.452359  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:25.552870  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:25.952571  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.009264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:26.509263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:26.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.009531  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.452141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:27.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:27.952219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.008825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.452943  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:28.509596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:28.951821  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.009481  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:29.509195  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:29.953059  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.053354  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.452999  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:30.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:30.951979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.009797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.453388  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:31.508724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:31.952067  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.009597  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.452510  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:32.509504  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:32.952078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.009757  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.451725  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:33.509601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:33.952055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.009994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.452436  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:34.509072  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:34.952958  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.009293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.453339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:35.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:35.952370  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.009056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.453293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:36.508838  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:36.953074  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.013450  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.452649  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:37.509512  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:37.952032  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.009978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.452885  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:38.509308  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:38.952931  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.009434  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.452323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:39.509150  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:39.953222  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.009006  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.452790  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:40.509538  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:40.951932  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.009432  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.455147  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:41.508750  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:41.952251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.009149  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.453440  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:42.509240  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:42.952824  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.009671  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.451894  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:43.509637  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:43.951679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.009272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.452122  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:44.509896  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:44.952875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.009456  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.452086  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:45.509855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:45.952037  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.009503  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.452605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:46.509412  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:46.951948  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.009749  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:47.508624  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:47.952176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.008729  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.452489  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:48.509007  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:48.952454  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.452929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:49.509326  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:49.953179  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.009573  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.452080  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:50.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:50.952316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.008983  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.452008  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:51.509589  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:51.952373  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.452203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:52.509141  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:52.952449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.009163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.452673  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:53.509389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:53.952399  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.008968  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.452357  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:54.509312  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:54.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.008903  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.452740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:55.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:55.952133  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.008515  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:56.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:56.952684  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.009269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.452860  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:57.509842  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:57.952800  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.009471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.452132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:58.508760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:58.952191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.008875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.452781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:33:59.509355  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:33:59.953587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.054438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.452155  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:00.508625  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:00.952742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.009015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.452064  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:01.508595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:01.952010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.010061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.452878  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:02.509741  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:02.952175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.008974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.452307  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:03.508972  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:03.952590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.009251  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.452989  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:04.509709  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:04.952475  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.009023  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.453033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:05.509562  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:05.952194  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.008939  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.453017  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:06.509675  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:06.952060  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.010460  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.451978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:07.509900  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:07.952073  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.008912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.452986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:08.509922  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:08.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.009396  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.452015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:09.508696  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:09.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.053986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.453071  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:10.508707  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:10.952459  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.009139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.452040  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:11.509938  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:11.952708  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.009636  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.452462  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:12.509411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:12.951905  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.009391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.452055  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:13.509716  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:13.952153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.009034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.452857  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:14.509634  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:14.952411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.009151  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.453043  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:15.508787  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:15.951746  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.009679  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.452755  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:16.509577  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:16.951855  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.009721  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.452270  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:17.509070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:17.952417  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.009119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.452899  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:18.509945  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:18.952285  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.008973  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.452420  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:19.509163  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:19.952703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.009419  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.452368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:20.509153  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:20.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.009176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.451907  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:21.509703  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:21.952486  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.009310  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.453128  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:22.509247  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:22.952807  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.452479  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:23.509358  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:23.951882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.009724  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.452421  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:24.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:24.952303  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.052740  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.452786  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:25.509524  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:25.952084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.009393  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.452606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:26.509227  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:26.952919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.009449  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:27.509272  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:27.953056  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.008665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.452311  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:28.509135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:28.952950  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.009732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.452806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:29.509663  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:29.951992  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.009677  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.454926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:30.556176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:30.952552  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.009135  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.452491  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:31.509187  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:31.952765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.010044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.453284  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:32.509124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:32.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.009047  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.452601  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:33.509427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:33.952099  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.008641  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.452715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:34.509202  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:34.952690  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.009533  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.452468  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:35.509120  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:35.952652  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.009453  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.452283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:36.509034  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:36.952982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.010277  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.452898  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:37.509951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:37.952333  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.009152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.452796  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:38.509514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:38.951891  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.009341  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.452769  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:39.509365  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:39.952087  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.009812  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:40.508954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:40.953223  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.009045  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.452098  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:41.508795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:41.952125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.452644  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:42.509926  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:42.952124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.009805  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.452339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:43.509062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:43.952706  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.009289  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.453174  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:44.553316  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:44.952985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.009340  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.453131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:45.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:45.951783  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.009764  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.452224  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:46.509221  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:46.952799  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.009661  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.451963  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:47.509771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:47.951981  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.009474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.451982  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:48.510046  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:48.952776  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.009347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.451710  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:49.509422  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:49.952334  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.009230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.452851  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:50.509879  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:50.952761  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.009609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.453093  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:51.508618  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:51.952367  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.009335  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.451828  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:52.509765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:52.952131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.008768  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.452125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:53.508617  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:53.951915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.009924  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.452347  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:54.509044  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:54.953033  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.008575  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.452382  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:55.509020  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:55.952587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.009883  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.452266  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:56.508609  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:56.952427  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.008882  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.451996  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:57.509798  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:57.952349  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.008994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.452078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:58.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:58.953244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.008791  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.452820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:34:59.509438  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:34:59.952276  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.009095  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.454329  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:00.508526  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:00.951927  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.009514  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.452361  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:01.509176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:01.953124  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.008742  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.452318  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:02.509292  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:02.952978  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.008626  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.451991  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:03.509530  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:03.952094  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.008765  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.452089  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:04.509584  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:04.952535  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.009257  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.452850  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:05.509391  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:05.951665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.010070  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.452234  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:06.508751  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:06.952557  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.452356  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:07.509081  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:07.952954  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.009418  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.451578  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:08.509069  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:08.952979  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.009394  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.451672  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:09.509300  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:09.953084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.008804  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.452100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:10.508590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:10.952186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.008919  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.451692  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:11.509380  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:11.952159  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.008936  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.452290  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:12.509522  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:12.952657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.009294  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.452687  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:13.509734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:13.952004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.009665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.452477  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:14.509219  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:14.953317  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.053305  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.452957  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:15.509406  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:15.951753  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.010494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.451613  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:16.509469  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:16.951916  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.009368  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.451621  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:17.509537  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:17.951986  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.009697  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.452332  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:18.509309  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:18.953131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.008745  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.452118  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:19.508915  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:19.952506  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.009283  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.452596  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:20.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:20.953170  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.008925  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.453125  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:21.508686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:21.952130  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.009048  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.452863  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:22.509403  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:22.952211  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.009143  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.452579  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:23.509144  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:23.952593  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.009236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.452668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:24.509287  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:24.953152  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.008951  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.451960  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:25.509494  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:25.951797  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.009781  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.452176  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:26.508962  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:26.952918  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.010145  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.452488  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:27.509471  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:27.951970  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.009582  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.451912  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:28.508700  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:28.952497  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.009156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.453230  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:29.509119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:29.952889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.009476  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.454455  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:30.509009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:30.953474  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.009465  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.452010  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:31.509605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:31.951929  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.009559  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.452293  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:32.508723  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:32.952263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.053411  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.452665  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:33.509254  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:33.953146  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.008802  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.451806  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:34.509590  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:34.952410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.053369  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:35.509264  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:35.952818  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.009233  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.451994  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:36.509760  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:36.952529  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.009364  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.452180  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:37.509156  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:37.952662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.009587  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.451744  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:38.509487  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:38.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.008678  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.451795  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:39.509551  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:39.952298  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.009131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.452628  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:40.509567  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:40.952018  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.008605  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.452331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:41.509196  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:41.953269  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.009042  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.452866  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:42.509473  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:42.952009  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.053084  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.452446  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:43.509189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:43.952595  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.009451  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.452191  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:44.508730  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:44.952389  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.009061  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.452680  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:45.509241  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:45.952532  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.009493  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.452238  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:46.509131  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:46.952695  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.009405  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.452184  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:47.509012  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:47.952350  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.009078  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.452686  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:48.509295  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:48.953015  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.008664  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.452062  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:49.508632  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:49.952395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.008941  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.451875  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:50.509433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:50.952771  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.009472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.452374  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:51.509331  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:51.953175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.009259  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.453005  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:52.509759  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:52.952445  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.008890  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.452239  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:53.508767  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:53.952339  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.009100  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.452889  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:54.509472  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:54.952540  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.053004  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.452816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:55.509585  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:55.951856  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.009542  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.452139  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:56.508997  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:56.952820  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.009668  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.452051  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:57.508606  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:57.952019  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.008662  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.451816  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:58.509495  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:58.953217  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.008712  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.452395  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:35:59.508913  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:35:59.952323  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.008657  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.451985  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:00.509265  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:00.953263  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.008734  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.452478  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:01.509077  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:01.952688  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.009433  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.452119  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:02.508942  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:02.952693  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.009377  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.452681  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:03.509209  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:03.952342  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.009052  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.452762  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:04.509115  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:04.953186  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.010178  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.452732  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:05.509505  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 08:36:05.951715  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.009812  387539 kapi.go:107] duration metric: took 5m46.503976887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 08:36:06.011826  387539 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-051783 cluster.
	I0929 08:36:06.013337  387539 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 08:36:06.014809  387539 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0929 08:36:06.452825  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:06.952244  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.452410  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:07.952142  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.452175  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:08.952189  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.451974  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:09.953036  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.452917  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:10.953235  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.451608  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:11.952203  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.452236  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:12.952132  387539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 08:36:13.449535  387539 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0929 08:36:13.449570  387539 kapi.go:107] duration metric: took 6m0.00092228s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0929 08:36:13.449699  387539 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0929 08:36:13.451535  387539 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth
	I0929 08:36:13.453038  387539 addons.go:514] duration metric: took 6m2.320628972s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth]
	I0929 08:36:13.453089  387539 start.go:246] waiting for cluster config update ...
	I0929 08:36:13.453117  387539 start.go:255] writing updated cluster config ...
	I0929 08:36:13.453476  387539 ssh_runner.go:195] Run: rm -f paused
	I0929 08:36:13.457677  387539 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:13.461120  387539 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.465176  387539 pod_ready.go:94] pod "coredns-66bc5c9577-n8bx8" is "Ready"
	I0929 08:36:13.465203  387539 pod_ready.go:86] duration metric: took 4.058605ms for pod "coredns-66bc5c9577-n8bx8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.467075  387539 pod_ready.go:83] waiting for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.470714  387539 pod_ready.go:94] pod "etcd-addons-051783" is "Ready"
	I0929 08:36:13.470733  387539 pod_ready.go:86] duration metric: took 3.636114ms for pod "etcd-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.472521  387539 pod_ready.go:83] waiting for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.476217  387539 pod_ready.go:94] pod "kube-apiserver-addons-051783" is "Ready"
	I0929 08:36:13.476238  387539 pod_ready.go:86] duration metric: took 3.697266ms for pod "kube-apiserver-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.478025  387539 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:13.862501  387539 pod_ready.go:94] pod "kube-controller-manager-addons-051783" is "Ready"
	I0929 08:36:13.862531  387539 pod_ready.go:86] duration metric: took 384.48807ms for pod "kube-controller-manager-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.061450  387539 pod_ready.go:83] waiting for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.461226  387539 pod_ready.go:94] pod "kube-proxy-wbl7p" is "Ready"
	I0929 08:36:14.461255  387539 pod_ready.go:86] duration metric: took 399.774957ms for pod "kube-proxy-wbl7p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:14.661898  387539 pod_ready.go:83] waiting for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061371  387539 pod_ready.go:94] pod "kube-scheduler-addons-051783" is "Ready"
	I0929 08:36:15.061418  387539 pod_ready.go:86] duration metric: took 399.4933ms for pod "kube-scheduler-addons-051783" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:36:15.061435  387539 pod_ready.go:40] duration metric: took 1.603719933s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:36:15.109384  387539 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 08:36:15.111939  387539 out.go:179] * Done! kubectl is now configured to use "addons-051783" cluster and "default" namespace by default
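The gcp-auth messages in the log above note that a pod can opt out of credential mounting by carrying a label with the gcp-auth-skip-secret key. A minimal sketch of such a pod spec follows; the pod name, image, and the "true" value are illustrative assumptions, since the log only specifies the label key:

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-auth-demo            # placeholder name, not taken from this run
  labels:
    gcp-auth-skip-secret: "true"    # the key is what the log says to add; the value here is an assumption
spec:
  containers:
  - name: demo
    image: busybox                  # placeholder image
    command: ["sleep", "3600"]

As the log also notes, pods that already exist only pick up (or skip) the mounted credentials after they are recreated, or after rerunning addons enable with --refresh.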
	
	
	==> CRI-O <==
	Sep 29 08:41:58 addons-051783 crio[938]: time="2025-09-29 08:41:58.413279288Z" level=info msg="Trying to access \"docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\""
	Sep 29 08:41:58 addons-051783 crio[938]: time="2025-09-29 08:41:58.959251685Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=90952199-6cbd-4aa6-af85-c8c85f155a73 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:41:58 addons-051783 crio[938]: time="2025-09-29 08:41:58.959512852Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=90952199-6cbd-4aa6-af85-c8c85f155a73 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:06 addons-051783 crio[938]: time="2025-09-29 08:42:06.096893797Z" level=info msg="Stopping pod sandbox: 6b5028c3929cf4165bee897115810112cb5f5fa98f8d8d8e1e20ab2875d3875e" id=62522336-d089-45f1-9036-1d033020707a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:42:06 addons-051783 crio[938]: time="2025-09-29 08:42:06.096932857Z" level=info msg="Stopped pod sandbox (already stopped): 6b5028c3929cf4165bee897115810112cb5f5fa98f8d8d8e1e20ab2875d3875e" id=62522336-d089-45f1-9036-1d033020707a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 08:42:06 addons-051783 crio[938]: time="2025-09-29 08:42:06.097262087Z" level=info msg="Removing pod sandbox: 6b5028c3929cf4165bee897115810112cb5f5fa98f8d8d8e1e20ab2875d3875e" id=d7d18357-80b4-401b-921a-2911ae403c80 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:42:06 addons-051783 crio[938]: time="2025-09-29 08:42:06.106001498Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 08:42:06 addons-051783 crio[938]: time="2025-09-29 08:42:06.106041178Z" level=info msg="Removed pod sandbox: 6b5028c3929cf4165bee897115810112cb5f5fa98f8d8d8e1e20ab2875d3875e" id=d7d18357-80b4-401b-921a-2911ae403c80 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 08:42:07 addons-051783 crio[938]: time="2025-09-29 08:42:07.958402859Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=99440375-d0b8-4419-ae78-4f03c4514c7f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:07 addons-051783 crio[938]: time="2025-09-29 08:42:07.958673774Z" level=info msg="Image docker.io/nginx:alpine not found" id=99440375-d0b8-4419-ae78-4f03c4514c7f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:11 addons-051783 crio[938]: time="2025-09-29 08:42:11.958860323Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=aaca4c19-71cb-4897-ab67-fbddea5b71fb name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:11 addons-051783 crio[938]: time="2025-09-29 08:42:11.959180420Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=aaca4c19-71cb-4897-ab67-fbddea5b71fb name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:12 addons-051783 crio[938]: time="2025-09-29 08:42:12.999304799Z" level=info msg="Trying to access \"docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\""
	Sep 29 08:42:25 addons-051783 crio[938]: time="2025-09-29 08:42:25.960305132Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=16c8248d-6192-4765-b598-786fbc641e77 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:25 addons-051783 crio[938]: time="2025-09-29 08:42:25.960625416Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=16c8248d-6192-4765-b598-786fbc641e77 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:36 addons-051783 crio[938]: time="2025-09-29 08:42:36.958614891Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=e959d11d-76b5-4562-ac97-a6641a86f4b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:36 addons-051783 crio[938]: time="2025-09-29 08:42:36.958969084Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=e959d11d-76b5-4562-ac97-a6641a86f4b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:43 addons-051783 crio[938]: time="2025-09-29 08:42:43.656877612Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=ff092da5-b6db-4fa1-9319-86e228da7fb8 name=/runtime.v1.ImageService/PullImage
	Sep 29 08:42:43 addons-051783 crio[938]: time="2025-09-29 08:42:43.660798784Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Sep 29 08:42:49 addons-051783 crio[938]: time="2025-09-29 08:42:49.958827531Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=f25645c2-894e-4479-aee3-d4602fc4ad51 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:49 addons-051783 crio[938]: time="2025-09-29 08:42:49.959183518Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=f25645c2-894e-4479-aee3-d4602fc4ad51 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:54 addons-051783 crio[938]: time="2025-09-29 08:42:54.958540010Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=75cb3d02-d044-426a-a976-e1a0d5234591 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:42:54 addons-051783 crio[938]: time="2025-09-29 08:42:54.958912917Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=75cb3d02-d044-426a-a976-e1a0d5234591 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:43:03 addons-051783 crio[938]: time="2025-09-29 08:43:03.959142628Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=151882c3-e85a-4229-8b67-c438860e9de1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:43:03 addons-051783 crio[938]: time="2025-09-29 08:43:03.959450617Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=151882c3-e85a-4229-8b67-c438860e9de1 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	15470dfdbc373       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          4 minutes ago       Running             csi-snapshotter                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	27b09cd861214       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          5 minutes ago       Running             csi-provisioner                          0                   0a15333993f59       csi-hostpathplugin-59n9q
	f91efb30edf5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   b37a2c191a161       busybox
	b891eff935e5b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   0a15333993f59       csi-hostpathplugin-59n9q
	1b49b8a0c49b0       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   0a15333993f59       csi-hostpathplugin-59n9q
	78cd30ad0ac78       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                8 minutes ago       Running             node-driver-registrar                    0                   0a15333993f59       csi-hostpathplugin-59n9q
	80836b6027c82       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             9 minutes ago       Running             controller                               0                   3f400eb1db037       ingress-nginx-controller-9cc49f96f-qxqnk
	fa2f9b0c2f698       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            10 minutes ago      Running             gadget                                   0                   2b559b62ddeb7       gadget-p475s
	45863f8b96f32       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   f6de9f678281f       snapshot-controller-7d9fbc56b8-xpkwb
	958aa9722d317       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   10 minutes ago      Running             csi-external-health-monitor-controller   0                   0a15333993f59       csi-hostpathplugin-59n9q
	727b1119f42fa       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             10 minutes ago      Running             csi-attacher                             0                   942be1f7fe3d6       csi-hostpath-attacher-0
	7cd9c383cc30b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   10 minutes ago      Exited              patch                                    0                   748502b4be4ae       ingress-nginx-admission-patch-scvfj
	a07e229bf44a3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   6d94b7786d291       snapshot-controller-7d9fbc56b8-n65gp
	964faa56de026       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              10 minutes ago      Running             csi-resizer                              0                   e4387328f31ab       csi-hostpath-resizer-0
	739db184c3579       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             11 minutes ago      Running             local-path-provisioner                   0                   7bd7dc81e5ff1       local-path-provisioner-648f6765c9-mzt6q
	64ec0688b1d33       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   11 minutes ago      Exited              create                                   0                   544ece1299156       ingress-nginx-admission-create-rbxvf
	ec2908a8acb76       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             12 minutes ago      Running             coredns                                  0                   8e80666def432       coredns-66bc5c9577-n8bx8
	48e51a6b3842e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             12 minutes ago      Running             storage-provisioner                      0                   b3063249d1902       storage-provisioner
	e6e25b7f19aec       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             12 minutes ago      Running             kindnet-cni                              0                   ea7b34d68514f       kindnet-47v7m
	a04df67a3379a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             12 minutes ago      Running             kube-proxy                               0                   9dbf0742f683c       kube-proxy-wbl7p
	3d5bc8bd7f0ff       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             13 minutes ago      Running             etcd                                     0                   240e67822abd8       etcd-addons-051783
	2e4ff50d0ab7d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             13 minutes ago      Running             kube-apiserver                           0                   7d31b1c07e6fc       kube-apiserver-addons-051783
	6d75e80cafef2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             13 minutes ago      Running             kube-controller-manager                  0                   0e144a50e60a7       kube-controller-manager-addons-051783
	33ea9996cc1d3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             13 minutes ago      Running             kube-scheduler                           0                   eee48e5387175       kube-scheduler-addons-051783
	
	
	==> coredns [ec2908a8acb7634faddb0add70c1cdc6e4b2ec0e64082e83c00bcc1f5187825c] <==
	[INFO] 10.244.0.22:53146 - 52855 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135376s
	[INFO] 10.244.0.22:44463 - 13157 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003407125s
	[INFO] 10.244.0.22:42741 - 2598 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005880456s
	[INFO] 10.244.0.22:43358 - 65412 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005081069s
	[INFO] 10.244.0.22:56808 - 9814 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005221504s
	[INFO] 10.244.0.22:57222 - 14161 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005164648s
	[INFO] 10.244.0.22:51834 - 10942 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006548594s
	[INFO] 10.244.0.22:37769 - 48093 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004505471s
	[INFO] 10.244.0.22:41744 - 45710 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007413415s
	[INFO] 10.244.0.22:56260 - 25719 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002697955s
	[INFO] 10.244.0.22:35710 - 58420 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003322975s
	[INFO] 10.244.0.26:59060 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000230685s
	[INFO] 10.244.0.26:45421 - 3 "AAAA IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000136278s
	[INFO] 10.244.0.26:44591 - 4 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116365s
	[INFO] 10.244.0.26:57553 - 5 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117524s
	[INFO] 10.244.0.26:49960 - 6 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003803543s
	[INFO] 10.244.0.26:37529 - 7 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 204 0.004482599s
	[INFO] 10.244.0.26:51766 - 8 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.int. udp 75 false 512" NXDOMAIN qr,rd,ra 148 0.147452363s
	[INFO] 10.244.0.26:46339 - 9 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NXDOMAIN qr,aa,rd 149 0.000143392s
	[INFO] 10.244.0.26:35817 - 10 "A IN registry.kube-system.svc.cluster.local.default.svc.cluster.local. udp 82 false 512" NXDOMAIN qr,aa,rd 175 0.000114781s
	[INFO] 10.244.0.26:57333 - 11 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128127s
	[INFO] 10.244.0.26:33589 - 12 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009747s
	[INFO] 10.244.0.26:38381 - 13 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003185786s
	[INFO] 10.244.0.26:42582 - 14 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 204 0.005148102s
	[INFO] 10.244.0.26:42532 - 15 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.int. udp 75 false 512" NXDOMAIN qr,rd,ra 148 0.130600393s
	
	
	==> describe nodes <==
	Name:               addons-051783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-051783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=addons-051783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T08_30_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-051783
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-051783"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 08:30:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-051783
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 08:43:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 08:38:37 +0000   Mon, 29 Sep 2025 08:30:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-051783
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 83273b57f406470abdf516e252de2f52
	  System UUID:                ec5529e1-1ad9-400f-8294-1adf6616ba82
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  gadget                      gadget-p475s                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-qxqnk                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         12m
	  kube-system                 amd-gpu-device-plugin-xvf9b                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-n8bx8                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-59n9q                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-051783                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-47v7m                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-addons-051783                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-051783                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-wbl7p                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-051783                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-7d9fbc56b8-n65gp                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-xpkwb                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  local-path-storage          local-path-provisioner-648f6765c9-mzt6q                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-051783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-051783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-051783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m   node-controller  Node addons-051783 event: Registered Node addons-051783 in Controller
	  Normal  NodeReady                12m   kubelet          Node addons-051783 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[ +16.774979] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 21 41 37 dd f5 08 06
	[  +0.000328] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[  +6.075530] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 33 34 7b 85 cf 08 06
	[  +0.055887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[Sep29 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fb 19 b5 d0 db 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[  +6.806604] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[ +13.433681] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	[  +8.966707] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 f7 73 94 db cd 08 06
	[  +0.000344] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[Sep29 08:07] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 ad d0 02 25 47 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	
	
	==> etcd [3d5bc8bd7f0ffa9831231e2ccd173ca20be89d6dcc1ee1ad3b14f8dd9571bb86] <==
	{"level":"warn","ts":"2025-09-29T08:30:02.997494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.003681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.011615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.018242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.030088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.033604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.039960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.046371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:03.100824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.793114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:13.799945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.542994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.549599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.569139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:30:40.575527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:32:28.071330Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.763336ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071530Z","caller":"traceutil/trace.go:172","msg":"trace[30119979] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1117; }","duration":"161.980989ms","start":"2025-09-29T08:32:27.909530Z","end":"2025-09-29T08:32:28.071511Z","steps":["trace[30119979] 'range keys from in-memory index tree'  (duration: 161.701686ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T08:32:28.071329Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.131454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:32:28.071650Z","caller":"traceutil/trace.go:172","msg":"trace[1183857226] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"120.458435ms","start":"2025-09-29T08:32:27.951174Z","end":"2025-09-29T08:32:28.071633Z","steps":["trace[1183857226] 'range keys from in-memory index tree'  (duration: 120.052644ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T08:33:12.239457Z","caller":"traceutil/trace.go:172","msg":"trace[155675200] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"129.084223ms","start":"2025-09-29T08:33:12.110348Z","end":"2025-09-29T08:33:12.239432Z","steps":["trace[155675200] 'process raft request'  (duration: 69.579624ms)","trace[155675200] 'compare'  (duration: 59.405727ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T08:33:12.474373Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.785446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T08:33:12.474452Z","caller":"traceutil/trace.go:172","msg":"trace[1612262900] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1258; }","duration":"129.87677ms","start":"2025-09-29T08:33:12.344560Z","end":"2025-09-29T08:33:12.474437Z","steps":["trace[1612262900] 'range keys from in-memory index tree'  (duration: 129.713966ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T08:40:02.621144Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1444}
	{"level":"info","ts":"2025-09-29T08:40:02.644347Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1444,"took":"22.608235ms","hash":1501025519,"current-db-size-bytes":6053888,"current-db-size":"6.1 MB","current-db-size-in-use-bytes":3846144,"current-db-size-in-use":"3.8 MB"}
	{"level":"info","ts":"2025-09-29T08:40:02.644399Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1501025519,"revision":1444,"compact-revision":-1}
	
	
	==> kernel <==
	 08:43:08 up  2:25,  0 users,  load average: 0.20, 0.27, 0.62
	Linux addons-051783 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e6e25b7f19aec7f99b8219bbbaa88084f2510369dbfa360e267a083261d1c336] <==
	I0929 08:41:02.477966       1 main.go:301] handling current node
	I0929 08:41:12.475930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:41:12.475966       1 main.go:301] handling current node
	I0929 08:41:22.478928       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:41:22.478997       1 main.go:301] handling current node
	I0929 08:41:32.482915       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:41:32.482955       1 main.go:301] handling current node
	I0929 08:41:42.475975       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:41:42.476005       1 main.go:301] handling current node
	I0929 08:41:52.478944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:41:52.478986       1 main.go:301] handling current node
	I0929 08:42:02.481116       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:42:02.481151       1 main.go:301] handling current node
	I0929 08:42:12.476057       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:42:12.476091       1 main.go:301] handling current node
	I0929 08:42:22.476878       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:42:22.477267       1 main.go:301] handling current node
	I0929 08:42:32.479923       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:42:32.479954       1 main.go:301] handling current node
	I0929 08:42:42.475935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:42:42.475981       1 main.go:301] handling current node
	I0929 08:42:52.482940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:42:52.482978       1 main.go:301] handling current node
	I0929 08:43:02.482239       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:43:02.482281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e4ff50d0ab7df575a409e71f6c86b1e3bd4b8f41db0427eb9d65cbbef08b9a3] <==
	W0929 08:30:52.660152       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.660293       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.661168       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.661206       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.680870       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.680901       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:52.682064       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.216.72:443: connect: connection refused
	E0929 08:30:52.682170       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.216.72:443: connect: connection refused" logger="UnhandledError"
	W0929 08:30:59.130480       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 08:30:59.130524       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.130558       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 08:30:59.130912       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.135946       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	E0929 08:30:59.157237       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.200.83:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.200.83:443: connect: connection refused" logger="UnhandledError"
	I0929 08:30:59.225977       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 08:36:44.813354       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47410: use of closed network connection
	E0929 08:36:44.997114       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47438: use of closed network connection
	I0929 08:36:54.051263       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.58.104"}
	I0929 08:37:00.154224       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 08:37:00.239132       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 08:37:00.408198       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.245.4"}
	I0929 08:40:03.495564       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6d75e80cafef289bcb0634728686530f7d177ec79248071405ed0223eda388c2] <==
	E0929 08:30:40.536876       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 08:30:40.537102       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0929 08:30:40.537173       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 08:30:40.560116       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0929 08:30:40.563366       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 08:30:40.638265       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:30:40.663861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 08:30:55.534409       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0929 08:36:58.265328       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0929 08:37:30.688902       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E0929 08:40:11.149027       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.166100       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.188892       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.222082       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.275741       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.368102       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.541260       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:11.874921       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:12.528852       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:13.822456       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:16.394998       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:21.527876       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:31.780772       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 08:40:52.275224       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	I0929 08:41:38.386135       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [a04df67a3379aa412e270c65b38675702f42ba0dc9e5c07b8052fb9a090d6471] <==
	I0929 08:30:12.128941       1 server_linux.go:53] "Using iptables proxy"
	I0929 08:30:12.417641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:30:12.520178       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:30:12.520269       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:30:12.522477       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:30:12.570590       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:30:12.570755       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:30:12.583981       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:30:12.584563       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:30:12.584628       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:30:12.586703       1 config.go:200] "Starting service config controller"
	I0929 08:30:12.586768       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:30:12.586873       1 config.go:309] "Starting node config controller"
	I0929 08:30:12.586913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:30:12.586938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 08:30:12.587504       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:30:12.587567       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:30:12.587568       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:30:12.587628       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:30:12.687916       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:30:12.688043       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:30:12.688062       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [33ea9996cc1d356857ab17f8e8157021f2b58227ecdb78065f0395986fc73f7b] <==
	E0929 08:30:03.522570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:03.522679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:03.522790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:03.522954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 08:30:03.522963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:03.522973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:03.523052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 08:30:03.523168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 08:30:03.523181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:03.523198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:03.523218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 08:30:03.523269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:30:03.523304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 08:30:03.523373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:03.523781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:30:04.391474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 08:30:04.430593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:30:04.474872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:30:04.497934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:30:04.640977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:30:04.655178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 08:30:04.765484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:30:04.784825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:30:04.965095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 08:30:06.819658       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 08:42:10 addons-051783 kubelet[1568]: E0929 08:42:10.957708    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c75569f9-aafe-41b4-9ffa-4e10d9573809"
	Sep 29 08:42:11 addons-051783 kubelet[1568]: E0929 08:42:11.959490    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	Sep 29 08:42:16 addons-051783 kubelet[1568]: E0929 08:42:16.106496    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135336106247845  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:42:16 addons-051783 kubelet[1568]: E0929 08:42:16.106543    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135336106247845  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:42:25 addons-051783 kubelet[1568]: E0929 08:42:25.960966    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	Sep 29 08:42:26 addons-051783 kubelet[1568]: E0929 08:42:26.108823    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135346108579902  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:42:26 addons-051783 kubelet[1568]: E0929 08:42:26.108874    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135346108579902  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:42:36 addons-051783 kubelet[1568]: E0929 08:42:36.110684    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135356110461854  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:42:36 addons-051783 kubelet[1568]: E0929 08:42:36.110717    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135356110461854  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:42:36 addons-051783 kubelet[1568]: E0929 08:42:36.959332    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	Sep 29 08:42:40 addons-051783 kubelet[1568]: I0929 08:42:40.957795    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:42:43 addons-051783 kubelet[1568]: E0929 08:42:43.656376    1568 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f"
	Sep 29 08:42:43 addons-051783 kubelet[1568]: E0929 08:42:43.656438    1568 kuberuntime_image.go:43] "Failed to pull image" err="initializing source docker://rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f"
	Sep 29 08:42:43 addons-051783 kubelet[1568]: E0929 08:42:43.656669    1568 kuberuntime_manager.go:1449] "Unhandled Error" err="container amd-gpu-device-plugin start failed in pod amd-gpu-device-plugin-xvf9b_kube-system(af4f61bf-c919-44d4-8d12-3579f9dce9c6): ErrImagePull: initializing source docker://rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:42:43 addons-051783 kubelet[1568]: E0929 08:42:43.656742    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ErrImagePull: \"initializing source docker://rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-xvf9b" podUID="af4f61bf-c919-44d4-8d12-3579f9dce9c6"
	Sep 29 08:42:46 addons-051783 kubelet[1568]: E0929 08:42:46.112618    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135366112353688  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:42:46 addons-051783 kubelet[1568]: E0929 08:42:46.112654    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135366112353688  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:42:49 addons-051783 kubelet[1568]: E0929 08:42:49.959492    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	Sep 29 08:42:54 addons-051783 kubelet[1568]: I0929 08:42:54.957905    1568 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xvf9b" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 08:42:54 addons-051783 kubelet[1568]: E0929 08:42:54.959311    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: initializing source docker://rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-xvf9b" podUID="af4f61bf-c919-44d4-8d12-3579f9dce9c6"
	Sep 29 08:42:56 addons-051783 kubelet[1568]: E0929 08:42:56.114613    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135376114350914  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:42:56 addons-051783 kubelet[1568]: E0929 08:42:56.114657    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135376114350914  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:43:03 addons-051783 kubelet[1568]: E0929 08:43:03.959746    1568 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec159452-503b-4642-b822-ea6cdac8e16e"
	Sep 29 08:43:06 addons-051783 kubelet[1568]: E0929 08:43:06.116946    1568 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759135386116685429  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	Sep 29 08:43:06 addons-051783 kubelet[1568]: E0929 08:43:06.116981    1568 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759135386116685429  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:470811}  inodes_used:{value:189}}"
	
	
	==> storage-provisioner [48e51a6b3842e2e63335e82d65f22a4db94233392a881d6d3ff86158809cd5ed] <==
	W0929 08:42:44.366315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:46.369687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:46.373697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:48.377381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:48.382950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:50.386176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:50.390122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:52.393712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:52.397800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:54.401107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:54.404992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:56.408617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:56.412598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:58.416224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:42:58.419989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:43:00.423023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:43:00.427993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:43:02.430775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:43:02.443306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:43:04.446423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:43:04.450635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:43:06.454571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:43:06.459983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:43:08.463640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:43:08.467795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
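The storage-provisioner log above is dominated by a single repeated warning: v1 Endpoints is deprecated in v1.33+ and discovery.k8s.io/v1 EndpointSlice should be used instead. Purely as an illustration (this is not minikube or storage-provisioner code; it assumes client-go and a reachable kubeconfig), reading the replacement resource from Go could look like this:

	// endpointslices.go - hypothetical sketch: list discovery.k8s.io/v1
	// EndpointSlices instead of the deprecated core v1 Endpoints.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// DiscoveryV1().EndpointSlices replaces CoreV1().Endpoints here.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}

The same information is available from the CLI via "kubectl -n kube-system get endpointslices".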
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-051783 -n addons-051783
helpers_test.go:269: (dbg) Run:  kubectl --context addons-051783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/AmdGpuDevicePlugin]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3: exit status 1 (85.159653ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:37:00 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrnn8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wrnn8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m9s                  default-scheduler  Successfully assigned default/nginx to addons-051783
	  Warning  Failed     4m54s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     102s (x3 over 4m54s)  kubelet            Error: ErrImagePull
	  Warning  Failed     102s (x2 over 3m51s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    74s (x4 over 4m54s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     74s (x4 over 4m54s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    62s (x4 over 6m9s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-051783/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:38:27 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z2l94 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-z2l94:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m42s                default-scheduler  Successfully assigned default/task-pv-pod to addons-051783
	  Warning  Failed     71s (x2 over 3m20s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     71s (x2 over 3m20s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    59s (x2 over 3m20s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     59s (x2 over 3m20s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    44s (x3 over 4m41s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zdgkp (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-zdgkp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rbxvf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-scvfj" not found
	Error from server (NotFound): pods "amd-gpu-device-plugin-xvf9b" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-051783 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rbxvf ingress-nginx-admission-patch-scvfj amd-gpu-device-plugin-xvf9b kube-ingress-dns-minikube helper-pod-create-pvc-1898a7cf-9892-401f-88a2-cc5562baebb3: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (363.46s)
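Every one of the non-running pods described above is failing for the same reason: docker.io answers with toomanyrequests because the unauthenticated pull rate limit is exhausted. One way to confirm how much quota the runner has left is Docker's documented rate-limit check, sketched below in Go; the auth.docker.io and registry-1.docker.io endpoints and the ratelimitpreview/test repository come from that documentation, not from this report.

	// ratelimit.go - hypothetical sketch of the Docker Hub rate-limit check:
	// fetch an anonymous pull token, then read the RateLimit-* headers from a
	// HEAD request against the ratelimitpreview/test manifest.
	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		// 1. Anonymous token scoped to the ratelimitpreview/test repository.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		// 2. HEAD the test manifest so nothing is actually downloaded.
		req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		res, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer res.Body.Close()

		// 3. The current quota window is reported in these response headers.
		fmt.Println("RateLimit-Limit:    ", res.Header.Get("RateLimit-Limit"))
		fmt.Println("RateLimit-Remaining:", res.Header.Get("RateLimit-Remaining"))
	}

If RateLimit-Remaining is 0, the ErrImagePull/ImagePullBackOff events above are expected until the window resets, or until the job pulls with authenticated credentials or through a registry mirror.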

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-580781 --alsologtostderr -v=1]
E0929 08:56:43.402029  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-580781 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-580781 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-580781 --alsologtostderr -v=1] stderr:
I0929 08:56:31.235690  444734 out.go:360] Setting OutFile to fd 1 ...
I0929 08:56:31.235993  444734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 08:56:31.236004  444734 out.go:374] Setting ErrFile to fd 2...
I0929 08:56:31.236009  444734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 08:56:31.236204  444734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
I0929 08:56:31.236503  444734 mustload.go:65] Loading cluster: functional-580781
I0929 08:56:31.236865  444734 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 08:56:31.237219  444734 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
I0929 08:56:31.254956  444734 host.go:66] Checking if "functional-580781" exists ...
I0929 08:56:31.255313  444734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0929 08:56:31.309686  444734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 08:56:31.299508313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0929 08:56:31.309809  444734 api_server.go:166] Checking apiserver status ...
I0929 08:56:31.309880  444734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0929 08:56:31.309919  444734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
I0929 08:56:31.327614  444734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
I0929 08:56:31.428955  444734 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5530/cgroup
W0929 08:56:31.439069  444734 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5530/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0929 08:56:31.439139  444734 ssh_runner.go:195] Run: ls
I0929 08:56:31.442784  444734 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0929 08:56:31.446988  444734 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0929 08:56:31.447027  444734 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0929 08:56:31.447168  444734 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 08:56:31.447183  444734 addons.go:69] Setting dashboard=true in profile "functional-580781"
I0929 08:56:31.447189  444734 addons.go:238] Setting addon dashboard=true in "functional-580781"
I0929 08:56:31.447218  444734 host.go:66] Checking if "functional-580781" exists ...
I0929 08:56:31.447518  444734 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
I0929 08:56:31.466894  444734 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0929 08:56:31.468124  444734 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0929 08:56:31.469240  444734 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0929 08:56:31.469262  444734 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0929 08:56:31.469342  444734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
I0929 08:56:31.487547  444734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
I0929 08:56:31.594576  444734 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0929 08:56:31.594601  444734 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0929 08:56:31.613294  444734 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0929 08:56:31.613317  444734 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0929 08:56:31.631946  444734 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0929 08:56:31.631979  444734 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0929 08:56:31.650999  444734 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0929 08:56:31.651023  444734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0929 08:56:31.669561  444734 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0929 08:56:31.669589  444734 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0929 08:56:31.687889  444734 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0929 08:56:31.687914  444734 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0929 08:56:31.706364  444734 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0929 08:56:31.706392  444734 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0929 08:56:31.725120  444734 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0929 08:56:31.725149  444734 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0929 08:56:31.743862  444734 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0929 08:56:31.743885  444734 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0929 08:56:31.763207  444734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0929 08:56:32.209288  444734 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-580781 addons enable metrics-server

                                                
                                                
I0929 08:56:32.210953  444734 addons.go:201] Writing out "functional-580781" config to set dashboard=true...
W0929 08:56:32.211257  444734 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0929 08:56:32.211999  444734 kapi.go:59] client config for functional-580781: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt", KeyFile:"/home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.key", CAFile:"/home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0929 08:56:32.212565  444734 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0929 08:56:32.212586  444734 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0929 08:56:32.212593  444734 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0929 08:56:32.212599  444734 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0929 08:56:32.212614  444734 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0929 08:56:32.220158  444734 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  73a57882-52f7-4704-996f-e07546cf3b3f 1175 0 2025-09-29 08:56:32 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-29 08:56:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.247.251,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.247.251],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0929 08:56:32.220300  444734 out.go:285] * Launching proxy ...
* Launching proxy ...
I0929 08:56:32.220357  444734 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-580781 proxy --port 36195]
I0929 08:56:32.220605  444734 dashboard.go:157] Waiting for kubectl to output host:port ...
I0929 08:56:32.264193  444734 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0929 08:56:32.264266  444734 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0929 08:56:32.272176  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[07e9729a-406e-49b3-979b-394fadd9b770] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc0008d4580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002923c0 TLS:<nil>}
I0929 08:56:32.272272  444734 retry.go:31] will retry after 86.898µs: Temporary Error: unexpected response code: 503
I0929 08:56:32.275754  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93883caa-2106-4e2e-9230-9dc1287f9afa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc000728c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6000 TLS:<nil>}
I0929 08:56:32.275844  444734 retry.go:31] will retry after 169.663µs: Temporary Error: unexpected response code: 503
I0929 08:56:32.281436  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c0cc85dd-886a-4327-9dd7-fa2bd41dc4e9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00147e080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083a000 TLS:<nil>}
I0929 08:56:32.281514  444734 retry.go:31] will retry after 240.269µs: Temporary Error: unexpected response code: 503
I0929 08:56:32.285042  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5a4ea7fc-20e3-4512-bf02-196962dca198] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc000728dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292500 TLS:<nil>}
I0929 08:56:32.285096  444734 retry.go:31] will retry after 472.575µs: Temporary Error: unexpected response code: 503
I0929 08:56:32.288287  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[65b6d1e5-110e-41f4-a274-619478edaa92] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc000728e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083a140 TLS:<nil>}
I0929 08:56:32.288334  444734 retry.go:31] will retry after 415.052µs: Temporary Error: unexpected response code: 503
I0929 08:56:32.291705  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f289cd53-87e7-4675-9169-89b93396a5fc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc000728f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083a280 TLS:<nil>}
I0929 08:56:32.291764  444734 retry.go:31] will retry after 963.558µs: Temporary Error: unexpected response code: 503
I0929 08:56:32.295133  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[daff79c8-b0e7-42bc-8505-89bc661b3712] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00079df40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083a3c0 TLS:<nil>}
I0929 08:56:32.295180  444734 retry.go:31] will retry after 1.245636ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.299464  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd962a58-9b6a-4ddc-9787-dcd87c9fdab8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00147e200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000596f00 TLS:<nil>}
I0929 08:56:32.299518  444734 retry.go:31] will retry after 2.165636ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.304700  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e46a855c-928b-4421-8d26-ed2cc17420d6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00077e0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292640 TLS:<nil>}
I0929 08:56:32.304745  444734 retry.go:31] will retry after 2.756848ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.310205  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[70f017d2-e6e9-441c-a6d3-cd65ded10ac2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc000729000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005972c0 TLS:<nil>}
I0929 08:56:32.310253  444734 retry.go:31] will retry after 4.084587ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.316503  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ed2e2f0c-47e9-4bb9-9e36-083f08f2a2aa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00077e1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083a500 TLS:<nil>}
I0929 08:56:32.316546  444734 retry.go:31] will retry after 8.123559ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.328045  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7969f13-44ae-4306-8a40-b62edfcdaaa4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00147e300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000597400 TLS:<nil>}
I0929 08:56:32.328105  444734 retry.go:31] will retry after 10.914329ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.342001  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4be4e433-aabf-461a-a5c6-16081b4d1e82] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00147e400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002928c0 TLS:<nil>}
I0929 08:56:32.342061  444734 retry.go:31] will retry after 11.862602ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.357143  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc7abfef-d27b-4312-8c32-0a7a84afab7b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00147e4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292a00 TLS:<nil>}
I0929 08:56:32.357227  444734 retry.go:31] will retry after 25.665916ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.386361  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[57214532-a20f-41a0-a220-3dc98efec355] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00077e300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292dc0 TLS:<nil>}
I0929 08:56:32.386463  444734 retry.go:31] will retry after 28.019335ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.417525  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[64cf4402-0419-4ff5-9847-e7cdf08d820f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00147e5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000597900 TLS:<nil>}
I0929 08:56:32.417584  444734 retry.go:31] will retry after 32.869891ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.453572  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0c73b9ff-b2ac-4919-a102-d7f641f1ae20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc000729140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292f00 TLS:<nil>}
I0929 08:56:32.453633  444734 retry.go:31] will retry after 60.183902ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.520015  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c926aa19-fc5b-4dc2-842b-8f72acdcc870] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00147e6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083a640 TLS:<nil>}
I0929 08:56:32.520098  444734 retry.go:31] will retry after 80.377668ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.604406  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d18a13f-8499-4e48-bfd1-cde8e9f9cbcd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc000729240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293040 TLS:<nil>}
I0929 08:56:32.604488  444734 retry.go:31] will retry after 129.101604ms: Temporary Error: unexpected response code: 503
I0929 08:56:32.736811  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7ee04479-0002-4933-a464-cd67e648b2f6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:32 GMT]] Body:0xc00077e3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083a780 TLS:<nil>}
I0929 08:56:32.736908  444734 retry.go:31] will retry after 331.248629ms: Temporary Error: unexpected response code: 503
I0929 08:56:33.071215  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4c17625c-45a1-4511-b79b-3b4ed024a053] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:33 GMT]] Body:0xc00147e800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000597a40 TLS:<nil>}
I0929 08:56:33.071297  444734 retry.go:31] will retry after 342.867758ms: Temporary Error: unexpected response code: 503
I0929 08:56:33.417690  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[786c52ef-1dd9-4216-9216-a9575f7eb651] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:33 GMT]] Body:0xc00077e4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293180 TLS:<nil>}
I0929 08:56:33.417751  444734 retry.go:31] will retry after 563.347478ms: Temporary Error: unexpected response code: 503
I0929 08:56:33.984435  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[de3c14e0-c092-4bf0-8a6c-fb64685d2df7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:33 GMT]] Body:0xc000729300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000597b80 TLS:<nil>}
I0929 08:56:33.984513  444734 retry.go:31] will retry after 927.739513ms: Temporary Error: unexpected response code: 503
I0929 08:56:34.915506  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ffe251d-9617-4b03-b49b-866b63bc7f91] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:34 GMT]] Body:0xc000729400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083a8c0 TLS:<nil>}
I0929 08:56:34.915569  444734 retry.go:31] will retry after 821.826248ms: Temporary Error: unexpected response code: 503
I0929 08:56:35.740528  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d7bc5570-0ff6-4f6a-a99f-9f99b46c78d8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:35 GMT]] Body:0xc00077e600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083aa00 TLS:<nil>}
I0929 08:56:35.740611  444734 retry.go:31] will retry after 2.154788184s: Temporary Error: unexpected response code: 503
I0929 08:56:37.898301  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2832bd7d-c3aa-4628-9b93-5adcfe254323] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:37 GMT]] Body:0xc0007294c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000597cc0 TLS:<nil>}
I0929 08:56:37.898372  444734 retry.go:31] will retry after 1.561314796s: Temporary Error: unexpected response code: 503
I0929 08:56:39.464026  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[adbbc658-f481-468d-95db-49d9636aa0ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:39 GMT]] Body:0xc00147e900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083ab40 TLS:<nil>}
I0929 08:56:39.464107  444734 retry.go:31] will retry after 2.440437989s: Temporary Error: unexpected response code: 503
I0929 08:56:41.907507  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cef74f4a-29a3-4d83-a262-96e05eb24cd8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:41 GMT]] Body:0xc00077e700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002932c0 TLS:<nil>}
I0929 08:56:41.907584  444734 retry.go:31] will retry after 4.544481166s: Temporary Error: unexpected response code: 503
I0929 08:56:46.455470  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0101525f-eb22-44f3-b959-a12c8b3bbf29] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:46 GMT]] Body:0xc00077e780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293e00 TLS:<nil>}
I0929 08:56:46.455537  444734 retry.go:31] will retry after 5.210018808s: Temporary Error: unexpected response code: 503
I0929 08:56:51.669534  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c6e13876-09f8-47d8-ad7b-c8528c166204] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:56:51 GMT]] Body:0xc00147ea40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016ac000 TLS:<nil>}
I0929 08:56:51.669595  444734 retry.go:31] will retry after 18.553740103s: Temporary Error: unexpected response code: 503
I0929 08:57:10.227863  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aba5745c-c168-4502-a3ee-f5dd645ba057] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:57:10 GMT]] Body:0xc0008d4640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00164c000 TLS:<nil>}
I0929 08:57:10.227939  444734 retry.go:31] will retry after 16.052126894s: Temporary Error: unexpected response code: 503
I0929 08:57:26.284202  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d00146da-5555-4ca2-8160-b0d7144bd17d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:57:26 GMT]] Body:0xc00077e880 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6140 TLS:<nil>}
I0929 08:57:26.284260  444734 retry.go:31] will retry after 29.980543026s: Temporary Error: unexpected response code: 503
I0929 08:57:56.268304  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3a8e62e4-a791-4ad9-abcf-d78f73397a9c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:57:56 GMT]] Body:0xc0008d4780 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016ac140 TLS:<nil>}
I0929 08:57:56.268373  444734 retry.go:31] will retry after 46.416356899s: Temporary Error: unexpected response code: 503
I0929 08:58:42.688044  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8ef5297-bc45-4756-8d2e-c2717faf123f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:58:42 GMT]] Body:0xc000728240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6280 TLS:<nil>}
I0929 08:58:42.688140  444734 retry.go:31] will retry after 56.785348861s: Temporary Error: unexpected response code: 503
I0929 08:59:39.479443  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7bb40197-37b6-4407-adb7-33e241b4a15c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 08:59:39 GMT]] Body:0xc0008d4280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083ac80 TLS:<nil>}
I0929 08:59:39.479522  444734 retry.go:31] will retry after 1m17.733905362s: Temporary Error: unexpected response code: 503
I0929 09:00:57.217160  444734 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f583b083-635a-4cfe-8bee-fe25b5303eb2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 09:00:57 GMT]] Body:0xc0008d4240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00083adc0 TLS:<nil>}
I0929 09:00:57.217247  444734 retry.go:31] will retry after 1m2.178405978s: Temporary Error: unexpected response code: 503
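The retry.go lines above show the dashboard health check polling the kubectl proxy URL with a growing backoff (starting in the microsecond range) and receiving 503 Service Unavailable every time, so the command never prints a URL and the test times out; since the addon pulls its images from docker.io, the same pull rate limit seen elsewhere in this report is a plausible cause. A stripped-down version of that kind of capped-backoff poll, written as a standalone sketch rather than minikube's actual retry helper, is:

	// poll.go - hypothetical sketch of a capped-backoff health poll like the
	// one recorded above (not minikube's own implementation).
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitFor200 polls url until it answers 200 OK or the timeout expires,
	// roughly doubling the delay between attempts up to maxDelay.
	func waitFor200(url string, timeout, maxDelay time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 100 * time.Millisecond
		for {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%s not healthy within %s", url, timeout)
			}
			time.Sleep(delay)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
		}
	}

	func main() {
		// Proxy URL taken from the log above; 5m roughly matches the test's budget.
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		fmt.Println(waitFor200(url, 5*time.Minute, 30*time.Second))
	}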
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-580781
helpers_test.go:243: (dbg) docker inspect functional-580781:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0",
	        "Created": "2025-09-29T08:48:33.034529223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 426177,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T08:48:33.070958392Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/hosts",
	        "LogPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0-json.log",
	        "Name": "/functional-580781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-580781:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-580781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0",
	                "LowerDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-580781",
	                "Source": "/var/lib/docker/volumes/functional-580781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-580781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-580781",
	                "name.minikube.sigs.k8s.io": "functional-580781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b37cbd8f035d18de42849ede2340b295b85fe84979fff6ab1cec7b19304cded",
	            "SandboxKey": "/var/run/docker/netns/5b37cbd8f035",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-580781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:69:98:c1:90:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "495c1eb850caf76b3c694e019686a6cae7865db2cadf61ef3a9e798cb0bdad99",
	                    "EndpointID": "8c180be2c2eda60e41070ee44e33e49d42b76851992a3e20cd0612627b94aff0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-580781",
	                        "38862aa7a2bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-580781 -n functional-580781
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 logs -n 25: (1.42516199s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-580781 ssh findmnt -T /mount-9p | grep 9p                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh            │ functional-580781 ssh -- ls -la /mount-9p                                                                         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh            │ functional-580781 ssh sudo umount -f /mount-9p                                                                    │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ mount          │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount2 --alsologtostderr -v=1 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh            │ functional-580781 ssh findmnt -T /mount1                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ mount          │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount3 --alsologtostderr -v=1 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ mount          │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount1 --alsologtostderr -v=1 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh            │ functional-580781 ssh findmnt -T /mount1                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh            │ functional-580781 ssh findmnt -T /mount2                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh            │ functional-580781 ssh findmnt -T /mount3                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ mount          │ -p functional-580781 --kill=true                                                                                  │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ start          │ -p functional-580781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ start          │ -p functional-580781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ start          │ -p functional-580781 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                   │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-580781 --alsologtostderr -v=1                                                    │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ update-context │ functional-580781 update-context --alsologtostderr -v=2                                                           │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ update-context │ functional-580781 update-context --alsologtostderr -v=2                                                           │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ update-context │ functional-580781 update-context --alsologtostderr -v=2                                                           │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ image          │ functional-580781 image ls --format short --alsologtostderr                                                       │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ image          │ functional-580781 image ls --format yaml --alsologtostderr                                                        │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ ssh            │ functional-580781 ssh pgrep buildkitd                                                                             │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │                     │
	│ image          │ functional-580781 image build -t localhost/my-image:functional-580781 testdata/build --alsologtostderr            │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ image          │ functional-580781 image ls                                                                                        │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ image          │ functional-580781 image ls --format json --alsologtostderr                                                        │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ image          │ functional-580781 image ls --format table --alsologtostderr                                                       │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:56:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:56:30.117689  444267 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:56:30.117944  444267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:56:30.117953  444267 out.go:374] Setting ErrFile to fd 2...
	I0929 08:56:30.117957  444267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:56:30.118174  444267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:56:30.118633  444267 out.go:368] Setting JSON to false
	I0929 08:56:30.119597  444267 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9539,"bootTime":1759126651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:56:30.119701  444267 start.go:140] virtualization: kvm guest
	I0929 08:56:30.121870  444267 out.go:179] * [functional-580781] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:56:30.123224  444267 notify.go:220] Checking for updates...
	I0929 08:56:30.123247  444267 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:56:30.124827  444267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:56:30.126381  444267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:56:30.127858  444267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:56:30.129178  444267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:56:30.130586  444267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:56:30.132381  444267 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:56:30.132938  444267 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:56:30.157005  444267 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:56:30.157169  444267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:56:30.211025  444267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 08:56:30.201186795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:56:30.211131  444267 docker.go:318] overlay module found
	I0929 08:56:30.212976  444267 out.go:179] * Using the docker driver based on existing profile
	I0929 08:56:30.214014  444267 start.go:304] selected driver: docker
	I0929 08:56:30.214028  444267 start.go:924] validating driver "docker" against &{Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:56:30.214115  444267 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:56:30.214224  444267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:56:30.267814  444267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 08:56:30.258243285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:56:30.268554  444267 cni.go:84] Creating CNI manager for ""
	I0929 08:56:30.268633  444267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:56:30.268696  444267 start.go:348] cluster config:
	{Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:56:30.270539  444267 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 29 09:00:34 functional-580781 crio[4228]: time="2025-09-29 09:00:34.438515015Z" level=info msg="Image docker.io/mysql:5.7 not found" id=26ab99b8-70b2-4154-9831-510579d3a09e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:00:37 functional-580781 crio[4228]: time="2025-09-29 09:00:37.437747460Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=3444eeda-48ff-48c5-b992-a1e638c0f970 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:00:37 functional-580781 crio[4228]: time="2025-09-29 09:00:37.438078122Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=3444eeda-48ff-48c5-b992-a1e638c0f970 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:00:41 functional-580781 crio[4228]: time="2025-09-29 09:00:41.404101607Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=55e40538-c24c-45d0-ae27-fa5cb87a849b name=/runtime.v1.ImageService/PullImage
	Sep 29 09:00:41 functional-580781 crio[4228]: time="2025-09-29 09:00:41.404879678Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=502c6137-c759-484d-928e-b9313709081d name=/runtime.v1.ImageService/PullImage
	Sep 29 09:00:41 functional-580781 crio[4228]: time="2025-09-29 09:00:41.409201647Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 09:00:47 functional-580781 crio[4228]: time="2025-09-29 09:00:47.437915432Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=7b8357ea-d63c-41f3-95f7-f9af527e01d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:00:47 functional-580781 crio[4228]: time="2025-09-29 09:00:47.438197870Z" level=info msg="Image docker.io/mysql:5.7 not found" id=7b8357ea-d63c-41f3-95f7-f9af527e01d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:00:53 functional-580781 crio[4228]: time="2025-09-29 09:00:53.438172420Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fcb93826-b8ca-4959-a05b-29797209fda7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:00:53 functional-580781 crio[4228]: time="2025-09-29 09:00:53.438421038Z" level=info msg="Image docker.io/nginx:alpine not found" id=fcb93826-b8ca-4959-a05b-29797209fda7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:00:59 functional-580781 crio[4228]: time="2025-09-29 09:00:59.439943445Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=0293088d-eda1-4f54-8c54-26b8415c1697 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:00:59 functional-580781 crio[4228]: time="2025-09-29 09:00:59.440276790Z" level=info msg="Image docker.io/mysql:5.7 not found" id=0293088d-eda1-4f54-8c54-26b8415c1697 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:01:05 functional-580781 crio[4228]: time="2025-09-29 09:01:05.438502303Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=8e8e340f-ed41-420f-b3d4-6edcb415ba6b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:01:05 functional-580781 crio[4228]: time="2025-09-29 09:01:05.438780207Z" level=info msg="Image docker.io/nginx:alpine not found" id=8e8e340f-ed41-420f-b3d4-6edcb415ba6b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:01:12 functional-580781 crio[4228]: time="2025-09-29 09:01:12.060152329Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f48bbe7d-89dd-48d6-8e5e-bd99e5327782 name=/runtime.v1.ImageService/PullImage
	Sep 29 09:01:12 functional-580781 crio[4228]: time="2025-09-29 09:01:12.060890576Z" level=info msg="Pulling image: docker.io/nginx:latest" id=3c317567-2844-439f-94a7-4af5fa052909 name=/runtime.v1.ImageService/PullImage
	Sep 29 09:01:12 functional-580781 crio[4228]: time="2025-09-29 09:01:12.062233885Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 29 09:01:14 functional-580781 crio[4228]: time="2025-09-29 09:01:14.437612664Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=922141c8-6515-47ef-b25e-57ab22dc83d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:01:14 functional-580781 crio[4228]: time="2025-09-29 09:01:14.437827176Z" level=info msg="Image docker.io/mysql:5.7 not found" id=922141c8-6515-47ef-b25e-57ab22dc83d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:01:20 functional-580781 crio[4228]: time="2025-09-29 09:01:20.438498127Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=20d0f9fc-a648-4423-931c-570c24e49e13 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:01:20 functional-580781 crio[4228]: time="2025-09-29 09:01:20.438735398Z" level=info msg="Image docker.io/nginx:alpine not found" id=20d0f9fc-a648-4423-931c-570c24e49e13 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:01:25 functional-580781 crio[4228]: time="2025-09-29 09:01:25.438363675Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=38c80df2-e9e5-41ee-89fc-9c87736b5523 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:01:25 functional-580781 crio[4228]: time="2025-09-29 09:01:25.438714367Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=38c80df2-e9e5-41ee-89fc-9c87736b5523 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:01:27 functional-580781 crio[4228]: time="2025-09-29 09:01:27.437986087Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=7b06b3b6-10f0-4e9a-aa7e-bc8859667e3b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:01:27 functional-580781 crio[4228]: time="2025-09-29 09:01:27.438266027Z" level=info msg="Image docker.io/mysql:5.7 not found" id=7b06b3b6-10f0-4e9a-aa7e-bc8859667e3b name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db229b500cea2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              mount-munger              0                   a56edad455b36       busybox-mount
	3201afa40ac94       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      11 minutes ago      Running             kube-apiserver            0                   82f71d0ce1af3       kube-apiserver-functional-580781
	346cf15effa51       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Running             kube-scheduler            1                   56f4894c02564       kube-scheduler-functional-580781
	47f1c99fd1006       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago      Running             kube-controller-manager   2                   454b7ed6d8fc6       kube-controller-manager-functional-580781
	06427c125c739       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Running             etcd                      1                   0823e3669f061       etcd-functional-580781
	1a6c4fa503da3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago      Exited              kube-controller-manager   1                   454b7ed6d8fc6       kube-controller-manager-functional-580781
	ef2ab2b48d81a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Running             kube-proxy                1                   630401fd11ff4       kube-proxy-7zlkp
	419813926dfe4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Running             kindnet-cni               1                   c865c04855dee       kindnet-pnn6t
	3ba534cc9995f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Running             storage-provisioner       1                   572ac443fe212       storage-provisioner
	0c420a09ed822       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Running             coredns                   1                   6fa5626cbca36       coredns-66bc5c9577-qn4f9
	49f5f6ce9ff79       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 minutes ago      Exited              coredns                   0                   6fa5626cbca36       coredns-66bc5c9577-qn4f9
	8fa1b4de8244f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Exited              storage-provisioner       0                   572ac443fe212       storage-provisioner
	1bfc7f0b08c9e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      12 minutes ago      Exited              kindnet-cni               0                   c865c04855dee       kindnet-pnn6t
	3cf0b4c8c0eff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      12 minutes ago      Exited              kube-proxy                0                   630401fd11ff4       kube-proxy-7zlkp
	83f4e402f8920       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      12 minutes ago      Exited              etcd                      0                   0823e3669f061       etcd-functional-580781
	31ff02ffd0a6d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      12 minutes ago      Exited              kube-scheduler            0                   56f4894c02564       kube-scheduler-functional-580781
	
	
	==> coredns [0c420a09ed82237c3eba1aa280297cf3d6eef42b2c186b93991ad924d809a5b4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32989 - 54781 "HINFO IN 1322808675416363747.3298756715011358413. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079188165s
	
	
	==> coredns [49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39260 - 3064 "HINFO IN 8182008874646901959.6041357028063081178. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.094703399s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-580781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-580781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=functional-580781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T08_48_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 08:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-580781
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:01:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 09:01:00 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 09:01:00 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 09:01:00 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 09:01:00 +0000   Mon, 29 Sep 2025 08:49:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-580781
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 565a9e40e71a440f889c5f66396fc290
	  System UUID:                10e5194d-9350-4f16-9277-d0c31ca42e51
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-rxhk2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	  default                     hello-node-connect-7d85dfc575-thgc5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  default                     mysql-5bb876957f-g7nlv                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     11m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-qn4f9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-580781                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-pnn6t                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-580781              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-580781     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7zlkp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-580781              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-m95gr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vt9lx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-580781 event: Registered Node functional-580781 in Controller
	  Normal  NodeReady                12m                kubelet          Node functional-580781 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node functional-580781 event: Registered Node functional-580781 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[ +16.774979] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 21 41 37 dd f5 08 06
	[  +0.000328] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[  +6.075530] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 33 34 7b 85 cf 08 06
	[  +0.055887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[Sep29 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fb 19 b5 d0 db 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[  +6.806604] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[ +13.433681] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	[  +8.966707] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 f7 73 94 db cd 08 06
	[  +0.000344] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[Sep29 08:07] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 ad d0 02 25 47 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	
	
	==> etcd [06427c125c739d8a8454d779cd4b1110ffca144587807bfc615ab7ba3aa85f21] <==
	{"level":"warn","ts":"2025-09-29T08:49:56.759252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.765404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.771892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.778356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.784387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.791034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.797707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.804065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.811416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.818910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.827589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.835603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.842079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.849060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.855818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.861671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.868051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.874174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.898754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.904987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.911930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.955188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41024","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T08:59:56.447779Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":998}
	{"level":"info","ts":"2025-09-29T08:59:56.456064Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":998,"took":"7.926939ms","hash":2515072890,"current-db-size-bytes":3457024,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":3457024,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2025-09-29T08:59:56.456115Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2515072890,"revision":998,"compact-revision":-1}
	
	
	==> etcd [83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4] <==
	{"level":"warn","ts":"2025-09-29T08:48:48.105391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.112542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.118842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.131384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.137802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.144433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.191696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39464","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T08:49:36.606109Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T08:49:36.606181Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-580781","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T08:49:36.606264Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T08:49:43.608132Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T08:49:43.608344Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.608389Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T08:49:43.608451Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608422Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608436Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608484Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608489Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T08:49:43.608500Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T08:49:43.608502Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.608467Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T08:49:43.610858Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T08:49:43.611011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.611040Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T08:49:43.611047Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-580781","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:01:32 up  2:44,  0 users,  load average: 0.31, 0.27, 0.38
	Linux functional-580781 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0] <==
	I0929 08:48:57.458656       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 08:48:57.458926       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 08:48:57.459093       1 main.go:148] setting mtu 1500 for CNI 
	I0929 08:48:57.459112       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 08:48:57.459139       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T08:48:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 08:48:57.660610       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 08:48:57.660631       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 08:48:57.660640       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 08:48:57.754818       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 08:48:58.060813       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 08:48:58.060862       1 metrics.go:72] Registering metrics
	I0929 08:48:58.060920       1 controller.go:711] "Syncing nftables rules"
	I0929 08:49:07.661018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:07.661159       1 main.go:301] handling current node
	I0929 08:49:17.661209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:17.661245       1 main.go:301] handling current node
	I0929 08:49:27.665005       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:27.665053       1 main.go:301] handling current node
	
	
	==> kindnet [419813926dfe4f3e19e4ed90e311ff20fe542f74f8ebf0dc42045be7549c7203] <==
	I0929 08:59:27.807180       1 main.go:301] handling current node
	I0929 08:59:37.803327       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:59:37.803368       1 main.go:301] handling current node
	I0929 08:59:47.802970       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:59:47.803008       1 main.go:301] handling current node
	I0929 08:59:57.802737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:59:57.802785       1 main.go:301] handling current node
	I0929 09:00:07.802977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:00:07.803007       1 main.go:301] handling current node
	I0929 09:00:17.810906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:00:17.810936       1 main.go:301] handling current node
	I0929 09:00:27.802931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:00:27.802992       1 main.go:301] handling current node
	I0929 09:00:37.803123       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:00:37.803156       1 main.go:301] handling current node
	I0929 09:00:47.803516       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:00:47.803554       1 main.go:301] handling current node
	I0929 09:00:57.803337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:00:57.803398       1 main.go:301] handling current node
	I0929 09:01:07.803437       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:01:07.803472       1 main.go:301] handling current node
	I0929 09:01:17.802825       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:01:17.802874       1 main.go:301] handling current node
	I0929 09:01:27.802637       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:01:27.802679       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3201afa40ac947ad27f530616359700f2260d511660f89535877216d9ccda60f] <==
	I0929 08:49:57.427650       1 autoregister_controller.go:144] Starting autoregister controller
	I0929 08:49:57.427655       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0929 08:49:57.427659       1 cache.go:39] Caches are synced for autoregister controller
	I0929 08:49:57.428967       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0929 08:49:57.428988       1 policy_source.go:240] refreshing policies
	I0929 08:49:57.451215       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 08:49:57.452554       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0929 08:49:58.320496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0929 08:49:58.525606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0929 08:49:58.526853       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 08:49:58.531444       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 08:49:59.297430       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 08:49:59.400126       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 08:49:59.467824       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 08:49:59.473940       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 08:50:01.045050       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 08:50:15.663020       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.209.181"}
	I0929 08:50:19.847113       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.150.212"}
	I0929 08:50:21.656932       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.202.166"}
	I0929 08:52:30.762468       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.76.169"}
	I0929 08:55:57.798872       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.203.171"}
	I0929 08:56:32.068798       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 08:56:32.186099       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.247.251"}
	I0929 08:56:32.201368       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.131.220"}
	I0929 08:59:57.353812       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1a6c4fa503da3ece68dc966f8fd6d8ebafc5d006b9831ba53bd6369943bfd8a8] <==
	I0929 08:49:46.426216       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0929 08:49:46.426241       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	I0929 08:49:46.477303       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0929 08:49:46.477328       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-serving"
	I0929 08:49:46.477387       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.477602       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0929 08:49:46.477627       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I0929 08:49:46.477678       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.478062       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0929 08:49:46.478084       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 08:49:46.478100       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.478446       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0929 08:49:46.478471       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0929 08:49:46.478527       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0929 08:49:46.478544       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I0929 08:49:46.478586       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.525502       1 controllermanager.go:781] "Started controller" controller="persistentvolume-protection-controller"
	I0929 08:49:46.525575       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0929 08:49:46.525583       1 shared_informer.go:349] "Waiting for caches to sync" controller="PV protection"
	I0929 08:49:46.576219       1 controllermanager.go:781] "Started controller" controller="ephemeral-volume-controller"
	I0929 08:49:46.576246       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0929 08:49:46.576263       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I0929 08:49:46.576298       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0929 08:49:46.576312       1 shared_informer.go:349] "Waiting for caches to sync" controller="ephemeral"
	F0929 08:49:47.625587       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [47f1c99fd1006fd2040b7a6a3a2e570a4c9366287bc4a9bb519ddf562e9c5ea9] <==
	I0929 08:50:00.740244       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 08:50:00.740360       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 08:50:00.741441       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 08:50:00.741457       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 08:50:00.741489       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 08:50:00.741500       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 08:50:00.741534       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 08:50:00.741600       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 08:50:00.741671       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 08:50:00.741777       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-580781"
	I0929 08:50:00.741828       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 08:50:00.742012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 08:50:00.743274       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 08:50:00.743330       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 08:50:00.743372       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 08:50:00.744533       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 08:50:00.744549       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 08:50:00.745371       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:50:00.763731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 08:56:32.141757       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.141939       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.145888       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.146332       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.150732       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.151510       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510] <==
	I0929 08:48:57.328633       1 server_linux.go:53] "Using iptables proxy"
	I0929 08:48:57.398736       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:48:57.499156       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:48:57.499205       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:48:57.499363       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:48:57.517179       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:48:57.517238       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:48:57.522369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:48:57.522730       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:48:57.522759       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:48:57.524004       1 config.go:200] "Starting service config controller"
	I0929 08:48:57.524408       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:48:57.524611       1 config.go:309] "Starting node config controller"
	I0929 08:48:57.524638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:48:57.525031       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:48:57.525043       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:48:57.525096       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:48:57.525103       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:48:57.624518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:48:57.625676       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 08:48:57.625779       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:48:57.625802       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ef2ab2b48d81ada5a6d38c217b125bc7066f486fe3d353763fa03f3e46cf1062] <==
	I0929 08:49:37.495720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:49:37.595898       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:49:37.595958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:49:37.596323       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:49:37.616663       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:49:37.616736       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:49:37.622131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:49:37.622572       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:49:37.622607       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:49:37.623810       1 config.go:200] "Starting service config controller"
	I0929 08:49:37.623827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:49:37.623926       1 config.go:309] "Starting node config controller"
	I0929 08:49:37.623973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:49:37.624025       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:49:37.624039       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:49:37.624063       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:49:37.624068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:49:37.724863       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:49:37.724889       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:49:37.724902       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 08:49:37.724927       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E0929 08:49:57.362242       1 reflector.go:205] "Failed to watch" err="nodes \"functional-580781\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:49:57.362283       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E0929 08:49:57.362242       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:49:57.362240       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-scheduler [31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8] <==
	E0929 08:48:48.629336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:48:48.629343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:48:48.629190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:48:48.629108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:48:48.629583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:48:48.629604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 08:48:49.481623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 08:48:49.529255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 08:48:49.592487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:48:49.604559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:48:49.694389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:48:49.697302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 08:48:49.731820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:48:49.745001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 08:48:49.759498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:48:49.789574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 08:48:49.801523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:48:49.827015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0929 08:48:50.224669       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:53.820543       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 08:49:53.820584       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:53.820641       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 08:49:53.820662       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 08:49:53.820691       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 08:49:53.820718       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [346cf15effa5119adbb50a15e72686cb099db1666fa69bfc2a68c8fe414f1503] <==
	I0929 08:49:56.473573       1 serving.go:386] Generated self-signed cert in-memory
	W0929 08:49:57.340173       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 08:49:57.340209       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 08:49:57.340222       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 08:49:57.340232       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 08:49:57.364559       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I0929 08:49:57.364580       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:49:57.366868       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:57.366910       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:57.367205       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 08:49:57.367245       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 08:49:57.467721       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 09:00:55 functional-580781 kubelet[5417]: E0929 09:00:55.534708    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136455534471422  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:00:55 functional-580781 kubelet[5417]: E0929 09:00:55.534747    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136455534471422  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:00:59 functional-580781 kubelet[5417]: E0929 09:00:59.440580    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 09:01:05 functional-580781 kubelet[5417]: E0929 09:01:05.438540    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-thgc5" podUID="fa8e859c-e2eb-4366-bf33-3fbbc9df80d6"
	Sep 29 09:01:05 functional-580781 kubelet[5417]: E0929 09:01:05.439119    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="29b039fe-6e18-4585-9490-7bba9fa796cf"
	Sep 29 09:01:05 functional-580781 kubelet[5417]: E0929 09:01:05.536164    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136465535938714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:01:05 functional-580781 kubelet[5417]: E0929 09:01:05.536198    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136465535938714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:01:12 functional-580781 kubelet[5417]: E0929 09:01:12.059670    5417 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 09:01:12 functional-580781 kubelet[5417]: E0929 09:01:12.059733    5417 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 09:01:12 functional-580781 kubelet[5417]: E0929 09:01:12.059992    5417 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-vt9lx_kubernetes-dashboard(f1ae0b1e-3f0a-4ea9-8226-53ff2ab3b178): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 09:01:12 functional-580781 kubelet[5417]: E0929 09:01:12.060051    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vt9lx" podUID="f1ae0b1e-3f0a-4ea9-8226-53ff2ab3b178"
	Sep 29 09:01:12 functional-580781 kubelet[5417]: E0929 09:01:12.060518    5417 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 29 09:01:12 functional-580781 kubelet[5417]: E0929 09:01:12.060562    5417 kuberuntime_image.go:43] "Failed to pull image" err="short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 29 09:01:12 functional-580781 kubelet[5417]: E0929 09:01:12.060712    5417 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-rxhk2_default(c14b0343-8ceb-4ede-99c3-a1a1c337e9ab): ErrImagePull: short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" logger="UnhandledError"
	Sep 29 09:01:12 functional-580781 kubelet[5417]: E0929 09:01:12.062015    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-rxhk2" podUID="c14b0343-8ceb-4ede-99c3-a1a1c337e9ab"
	Sep 29 09:01:14 functional-580781 kubelet[5417]: E0929 09:01:14.438177    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 09:01:15 functional-580781 kubelet[5417]: E0929 09:01:15.537550    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136475537333186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:01:15 functional-580781 kubelet[5417]: E0929 09:01:15.537590    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136475537333186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:01:16 functional-580781 kubelet[5417]: E0929 09:01:16.437514    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-thgc5" podUID="fa8e859c-e2eb-4366-bf33-3fbbc9df80d6"
	Sep 29 09:01:20 functional-580781 kubelet[5417]: E0929 09:01:20.439082    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="29b039fe-6e18-4585-9490-7bba9fa796cf"
	Sep 29 09:01:23 functional-580781 kubelet[5417]: E0929 09:01:23.437671    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-rxhk2" podUID="c14b0343-8ceb-4ede-99c3-a1a1c337e9ab"
	Sep 29 09:01:25 functional-580781 kubelet[5417]: E0929 09:01:25.439025    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vt9lx" podUID="f1ae0b1e-3f0a-4ea9-8226-53ff2ab3b178"
	Sep 29 09:01:25 functional-580781 kubelet[5417]: E0929 09:01:25.538916    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136485538690428  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:01:25 functional-580781 kubelet[5417]: E0929 09:01:25.538951    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136485538690428  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:01:27 functional-580781 kubelet[5417]: E0929 09:01:27.438564    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	
	
	==> storage-provisioner [3ba534cc9995fbd82b83b955735dab9de1c54de1d8fd7119eccb782d77fe63fd] <==
	W0929 09:01:07.580178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:09.583292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:09.586899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:11.589938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:11.595100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:13.598412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:13.602342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:15.605950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:15.609606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:17.612424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:17.616247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:19.619594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:19.623894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:21.626794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:21.630615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:23.633963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:23.637967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:25.640937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:25.644618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:27.647848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:27.651537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:29.654721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:29.658717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:31.661897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:01:31.666412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae] <==
	W0929 08:49:10.392985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:12.396758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:12.400952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:14.404972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:14.410240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:16.414556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:16.418891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:18.422630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:18.426730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:20.430506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:20.434896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:22.437786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:22.441661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:24.444570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:24.448383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:26.451410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:26.456736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:28.460215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:28.464644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:30.467426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:30.475151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:32.478302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:32.482203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:34.485888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:34.489874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-580781 -n functional-580781
helpers_test.go:269: (dbg) Run:  kubectl --context functional-580781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx: exit status 1 (98.48383ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:30 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://db229b500cea2a9d934455d2b9a59a2e28deb77a8bbc7c217b4b73c4b22b9246
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 08:52:24 +0000
	      Finished:     Mon, 29 Sep 2025 08:52:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgs2x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-qgs2x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11m   default-scheduler  Successfully assigned default/busybox-mount to functional-580781
	  Normal  Pulling    11m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m9s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.114s (1m53.786s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m9s  kubelet            Created container: mount-munger
	  Normal  Started    9m9s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-rxhk2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:52:30 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8j626 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8j626:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  9m3s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-rxhk2 to functional-580781
	  Normal   Pulling    100s (x5 over 9m2s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     21s (x5 over 8m2s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     21s (x5 over 8m2s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x12 over 8m2s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     10s (x12 over 8m2s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-thgc5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:55:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5cvn7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5cvn7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m36s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-thgc5 to functional-580781
	  Warning  Failed     52s (x3 over 4m43s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     52s (x3 over 4m43s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    17s (x5 over 4m43s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     17s (x5 over 4m43s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    5s (x4 over 5m35s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-g7nlv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:19 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pnqlc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pnqlc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  11m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-g7nlv to functional-580781
	  Warning  Failed     10m                   kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m51s (x5 over 11m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m8s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     2m8s (x4 over 8m39s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    6s (x21 over 10m)     kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6s (x21 over 10m)     kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:21 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfpvw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tfpvw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  11m                   default-scheduler  Successfully assigned default/nginx-svc to functional-580781
	  Warning  Failed     8m2s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m20s (x5 over 11m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     52s (x4 over 9m41s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     52s (x5 over 9m41s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    13s (x13 over 9m41s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     13s (x13 over 9m41s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kpg5f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kpg5f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  11m                    default-scheduler  Successfully assigned default/sp-pod to functional-580781
	  Warning  Failed     5m45s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m9s (x3 over 9m10s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m9s (x4 over 9m10s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    113s (x11 over 9m10s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     113s (x11 over 9m10s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    100s (x5 over 11m)     kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-m95gr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vt9lx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.35s)
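Note on the failures captured in the pod dump above (this note is not part of the test output): the hello-node pods fail because CRI-O refuses to resolve the short image name "kicbase/echo-server" when /etc/containers/registries.conf defines no unqualified-search registries, while the mysql, nginx-svc and sp-pod pulls hit Docker Hub's unauthenticated rate limit ("toomanyrequests"). Two hedged workaround sketches follow; they assume shell access to the node and a Docker Hub account, neither of which this run used.

	# Sketch 1: inside the node, /etc/containers/registries.conf could declare an
	# unqualified-search registry so short names such as "kicbase/echo-server" resolve:
	#     unqualified-search-registries = ["docker.io"]
	# followed by a CRI-O restart, e.g.:
	out/minikube-linux-amd64 -p functional-580781 ssh -- sudo systemctl restart crio

	# Sketch 2: work around the Docker Hub rate limit with an authenticated pull secret
	# (the credentials below are placeholders) and reference it from the pod specs via
	# imagePullSecrets.
	kubectl --context functional-580781 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>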

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-580781 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-580781 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-thgc5" [fa8e859c-e2eb-4366-bf33-3fbbc9df80d6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0929 08:56:15.699935  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-580781 -n functional-580781
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-29 09:05:58.116394233 +0000 UTC m=+2205.763019701
functional_test.go:1645: (dbg) Run:  kubectl --context functional-580781 describe po hello-node-connect-7d85dfc575-thgc5 -n default
functional_test.go:1645: (dbg) kubectl --context functional-580781 describe po hello-node-connect-7d85dfc575-thgc5 -n default:
Name:             hello-node-connect-7d85dfc575-thgc5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-580781/192.168.49.2
Start Time:       Mon, 29 Sep 2025 08:55:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5cvn7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-5cvn7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-thgc5 to functional-580781
Normal   Pulling    2m12s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     86s (x5 over 9m8s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     86s (x5 over 9m8s)   kubelet            Error: ErrImagePull
Warning  Failed     24s (x16 over 9m8s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    9s (x17 over 9m8s)   kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-580781 logs hello-node-connect-7d85dfc575-thgc5 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-580781 logs hello-node-connect-7d85dfc575-thgc5 -n default: exit status 1 (70.218513ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-thgc5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-580781 logs hello-node-connect-7d85dfc575-thgc5 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
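The ServiceCmdConnect failure traces back to the same short-name problem: the deployment was created with the unqualified image "kicbase/echo-server", which CRI-O on this node cannot resolve. A minimal sketch of the same steps with a fully qualified reference follows; the docker.io path and the 1.0 tag are assumptions, not values taken from this run.

	# Hypothetical variant of the test's create step using a fully qualified image name,
	# so CRI-O needs no unqualified-search registry to resolve it.
	kubectl --context functional-580781 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-580781 expose deployment hello-node-connect \
	  --type=NodePort --port=8080

	# Alternatively, pre-load the image into the node so no registry pull happens at
	# pod start (assumes the image is already available to the host's image store).
	out/minikube-linux-amd64 -p functional-580781 image load docker.io/kicbase/echo-server:1.0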
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-580781 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-thgc5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-580781/192.168.49.2
Start Time:       Mon, 29 Sep 2025 08:55:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5cvn7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-5cvn7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-thgc5 to functional-580781
Normal   Pulling    2m12s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     86s (x5 over 9m8s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     86s (x5 over 9m8s)   kubelet            Error: ErrImagePull
Warning  Failed     24s (x16 over 9m8s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    9s (x17 over 9m8s)   kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-580781 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-580781 logs -l app=hello-node-connect: exit status 1 (63.104883ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-thgc5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-580781 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-580781 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.203.171
IPs:                      10.99.203.171
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30704/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
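The service describe above shows an empty Endpoints field, which is consistent with the pod never becoming Ready: a NodePort service only forwards to Ready endpoints, so connections to NodePort 30704 cannot succeed while the image pull keeps failing. A quick check with standard kubectl (not part of the test run):

	# Confirms that no ready pod currently backs the hello-node-connect selector.
	kubectl --context functional-580781 get endpoints hello-node-connect
	kubectl --context functional-580781 get pods -l app=hello-node-connect -o wide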
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-580781
helpers_test.go:243: (dbg) docker inspect functional-580781:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0",
	        "Created": "2025-09-29T08:48:33.034529223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 426177,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T08:48:33.070958392Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/hosts",
	        "LogPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0-json.log",
	        "Name": "/functional-580781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-580781:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-580781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0",
	                "LowerDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-580781",
	                "Source": "/var/lib/docker/volumes/functional-580781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-580781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-580781",
	                "name.minikube.sigs.k8s.io": "functional-580781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b37cbd8f035d18de42849ede2340b295b85fe84979fff6ab1cec7b19304cded",
	            "SandboxKey": "/var/run/docker/netns/5b37cbd8f035",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-580781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:69:98:c1:90:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "495c1eb850caf76b3c694e019686a6cae7865db2cadf61ef3a9e798cb0bdad99",
	                    "EndpointID": "8c180be2c2eda60e41070ee44e33e49d42b76851992a3e20cd0612627b94aff0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-580781",
	                        "38862aa7a2bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-580781 -n functional-580781
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 logs -n 25: (1.499442411s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount3 --alsologtostderr -v=1 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ mount          │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount1 --alsologtostderr -v=1 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh            │ functional-580781 ssh findmnt -T /mount1                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh            │ functional-580781 ssh findmnt -T /mount2                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh            │ functional-580781 ssh findmnt -T /mount3                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ mount          │ -p functional-580781 --kill=true                                                                                  │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ start          │ -p functional-580781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ start          │ -p functional-580781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ start          │ -p functional-580781 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                   │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-580781 --alsologtostderr -v=1                                                    │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ update-context │ functional-580781 update-context --alsologtostderr -v=2                                                           │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ update-context │ functional-580781 update-context --alsologtostderr -v=2                                                           │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ update-context │ functional-580781 update-context --alsologtostderr -v=2                                                           │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ image          │ functional-580781 image ls --format short --alsologtostderr                                                       │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ image          │ functional-580781 image ls --format yaml --alsologtostderr                                                        │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ ssh            │ functional-580781 ssh pgrep buildkitd                                                                             │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │                     │
	│ image          │ functional-580781 image build -t localhost/my-image:functional-580781 testdata/build --alsologtostderr            │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ image          │ functional-580781 image ls                                                                                        │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ image          │ functional-580781 image ls --format json --alsologtostderr                                                        │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ image          │ functional-580781 image ls --format table --alsologtostderr                                                       │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:00 UTC │ 29 Sep 25 09:00 UTC │
	│ service        │ functional-580781 service list                                                                                    │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:02 UTC │ 29 Sep 25 09:02 UTC │
	│ service        │ functional-580781 service list -o json                                                                            │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:02 UTC │ 29 Sep 25 09:02 UTC │
	│ service        │ functional-580781 service --namespace=default --https --url hello-node                                            │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:02 UTC │                     │
	│ service        │ functional-580781 service hello-node --url --format={{.IP}}                                                       │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:02 UTC │                     │
	│ service        │ functional-580781 service hello-node --url                                                                        │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 09:02 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:56:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:56:30.117689  444267 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:56:30.117944  444267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:56:30.117953  444267 out.go:374] Setting ErrFile to fd 2...
	I0929 08:56:30.117957  444267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:56:30.118174  444267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:56:30.118633  444267 out.go:368] Setting JSON to false
	I0929 08:56:30.119597  444267 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9539,"bootTime":1759126651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:56:30.119701  444267 start.go:140] virtualization: kvm guest
	I0929 08:56:30.121870  444267 out.go:179] * [functional-580781] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:56:30.123224  444267 notify.go:220] Checking for updates...
	I0929 08:56:30.123247  444267 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:56:30.124827  444267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:56:30.126381  444267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:56:30.127858  444267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:56:30.129178  444267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:56:30.130586  444267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:56:30.132381  444267 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:56:30.132938  444267 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:56:30.157005  444267 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:56:30.157169  444267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:56:30.211025  444267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 08:56:30.201186795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:56:30.211131  444267 docker.go:318] overlay module found
	I0929 08:56:30.212976  444267 out.go:179] * Using the docker driver based on existing profile
	I0929 08:56:30.214014  444267 start.go:304] selected driver: docker
	I0929 08:56:30.214028  444267 start.go:924] validating driver "docker" against &{Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:56:30.214115  444267 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:56:30.214224  444267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:56:30.267814  444267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 08:56:30.258243285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:56:30.268554  444267 cni.go:84] Creating CNI manager for ""
	I0929 08:56:30.268633  444267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:56:30.268696  444267 start.go:348] cluster config:
	{Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:56:30.270539  444267 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 29 09:05:15 functional-580781 crio[4228]: time="2025-09-29 09:05:15.438713814Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c99f1bb6-e443-4418-b049-a0fbc94f9680 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:15 functional-580781 crio[4228]: time="2025-09-29 09:05:15.438778577Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=84f114f5-c72d-4c43-afd7-df1aafa157ac name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:15 functional-580781 crio[4228]: time="2025-09-29 09:05:15.438958734Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c99f1bb6-e443-4418-b049-a0fbc94f9680 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:18 functional-580781 crio[4228]: time="2025-09-29 09:05:18.438511803Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=cc1ac655-1ed4-416f-90d2-557d071d1878 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:18 functional-580781 crio[4228]: time="2025-09-29 09:05:18.438724486Z" level=info msg="Image docker.io/mysql:5.7 not found" id=cc1ac655-1ed4-416f-90d2-557d071d1878 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:27 functional-580781 crio[4228]: time="2025-09-29 09:05:27.438452670Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7e159eea-fdce-43df-ac28-45f2b221f958 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:27 functional-580781 crio[4228]: time="2025-09-29 09:05:27.438669415Z" level=info msg="Image docker.io/nginx:alpine not found" id=7e159eea-fdce-43df-ac28-45f2b221f958 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:30 functional-580781 crio[4228]: time="2025-09-29 09:05:30.437770444Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7f66bc4b-1140-42b3-8849-10c56c679794 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:30 functional-580781 crio[4228]: time="2025-09-29 09:05:30.438056471Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=7f66bc4b-1140-42b3-8849-10c56c679794 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:31 functional-580781 crio[4228]: time="2025-09-29 09:05:31.437910616Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=87b748fa-419b-41e6-8a56-a62eb3cd9b60 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:31 functional-580781 crio[4228]: time="2025-09-29 09:05:31.438192836Z" level=info msg="Image docker.io/mysql:5.7 not found" id=87b748fa-419b-41e6-8a56-a62eb3cd9b60 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:34 functional-580781 crio[4228]: time="2025-09-29 09:05:34.106562886Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=b97c3754-194f-4d27-9094-c4f436dfc544 name=/runtime.v1.ImageService/PullImage
	Sep 29 09:05:34 functional-580781 crio[4228]: time="2025-09-29 09:05:34.107803190Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 29 09:05:42 functional-580781 crio[4228]: time="2025-09-29 09:05:42.438258554Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=bd0eee6f-de67-4463-9995-b508ed9a9119 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:42 functional-580781 crio[4228]: time="2025-09-29 09:05:42.438264726Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=2c37f840-4ff1-41df-8983-a5d38120b3be name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:42 functional-580781 crio[4228]: time="2025-09-29 09:05:42.438264453Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=eb1a8ead-c769-41b4-bb3b-83d804ab0774 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:42 functional-580781 crio[4228]: time="2025-09-29 09:05:42.438510087Z" level=info msg="Image docker.io/mysql:5.7 not found" id=bd0eee6f-de67-4463-9995-b508ed9a9119 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:42 functional-580781 crio[4228]: time="2025-09-29 09:05:42.438558726Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=eb1a8ead-c769-41b4-bb3b-83d804ab0774 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:42 functional-580781 crio[4228]: time="2025-09-29 09:05:42.438640786Z" level=info msg="Image docker.io/nginx:alpine not found" id=2c37f840-4ff1-41df-8983-a5d38120b3be name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:54 functional-580781 crio[4228]: time="2025-09-29 09:05:54.438520893Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=773c8638-e438-4208-9617-c80cee9be4da name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:54 functional-580781 crio[4228]: time="2025-09-29 09:05:54.438560912Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1ff4600d-c259-453e-9ac9-7c4bad1b1fd1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:54 functional-580781 crio[4228]: time="2025-09-29 09:05:54.438778467Z" level=info msg="Image docker.io/mysql:5.7 not found" id=773c8638-e438-4208-9617-c80cee9be4da name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:54 functional-580781 crio[4228]: time="2025-09-29 09:05:54.438897381Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1ff4600d-c259-453e-9ac9-7c4bad1b1fd1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:57 functional-580781 crio[4228]: time="2025-09-29 09:05:57.438518542Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d15902e5-e92b-41ff-b327-986a1609b466 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:05:57 functional-580781 crio[4228]: time="2025-09-29 09:05:57.438788564Z" level=info msg="Image docker.io/nginx:alpine not found" id=d15902e5-e92b-41ff-b327-986a1609b466 name=/runtime.v1.ImageService/ImageStatus
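
The CRI-O entries above repeatedly report the mysql, nginx and dashboard images as not found while a pull is still in flight. A quick way to check whether any of them eventually landed in the node's image store is to query crictl through minikube ssh; the profile name is taken from these logs, and the snippet is an illustrative sketch, not part of the test suite:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Ask the node's CRI-O store whether the images the kubelet keeps probing for
// are present. `crictl images -q <ref>` prints an image ID when the image
// exists and nothing otherwise.
func main() {
	images := []string{
		"docker.io/mysql:5.7",
		"docker.io/nginx:alpine",
		// The dashboard images are digest-pinned in the log above; plain tags
		// are used here only to keep the references short.
		"docker.io/kubernetesui/dashboard:v2.7.0",
		"docker.io/kubernetesui/metrics-scraper:v1.0.8",
	}
	for _, img := range images {
		out, err := exec.Command("minikube", "-p", "functional-580781",
			"ssh", "--", "sudo", "crictl", "images", "-q", img).Output()
		switch {
		case err != nil:
			fmt.Printf("%-50s error: %v\n", img, err)
		case strings.TrimSpace(string(out)) == "":
			fmt.Printf("%-50s not present in CRI-O store\n", img)
		default:
			fmt.Printf("%-50s present (%s)\n", img, strings.TrimSpace(string(out)))
		}
	}
}

An empty result for every image would be consistent with the pulls never completing rather than with CRI-O losing images it already had.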
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db229b500cea2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Exited              mount-munger              0                   a56edad455b36       busybox-mount
	3201afa40ac94       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      16 minutes ago      Running             kube-apiserver            0                   82f71d0ce1af3       kube-apiserver-functional-580781
	346cf15effa51       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      16 minutes ago      Running             kube-scheduler            1                   56f4894c02564       kube-scheduler-functional-580781
	47f1c99fd1006       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      16 minutes ago      Running             kube-controller-manager   2                   454b7ed6d8fc6       kube-controller-manager-functional-580781
	06427c125c739       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      16 minutes ago      Running             etcd                      1                   0823e3669f061       etcd-functional-580781
	1a6c4fa503da3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      16 minutes ago      Exited              kube-controller-manager   1                   454b7ed6d8fc6       kube-controller-manager-functional-580781
	ef2ab2b48d81a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      16 minutes ago      Running             kube-proxy                1                   630401fd11ff4       kube-proxy-7zlkp
	419813926dfe4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      16 minutes ago      Running             kindnet-cni               1                   c865c04855dee       kindnet-pnn6t
	3ba534cc9995f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Running             storage-provisioner       1                   572ac443fe212       storage-provisioner
	0c420a09ed822       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      16 minutes ago      Running             coredns                   1                   6fa5626cbca36       coredns-66bc5c9577-qn4f9
	49f5f6ce9ff79       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      16 minutes ago      Exited              coredns                   0                   6fa5626cbca36       coredns-66bc5c9577-qn4f9
	8fa1b4de8244f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Exited              storage-provisioner       0                   572ac443fe212       storage-provisioner
	1bfc7f0b08c9e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      17 minutes ago      Exited              kindnet-cni               0                   c865c04855dee       kindnet-pnn6t
	3cf0b4c8c0eff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      17 minutes ago      Exited              kube-proxy                0                   630401fd11ff4       kube-proxy-7zlkp
	83f4e402f8920       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      17 minutes ago      Exited              etcd                      0                   0823e3669f061       etcd-functional-580781
	31ff02ffd0a6d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      17 minutes ago      Exited              kube-scheduler            0                   56f4894c02564       kube-scheduler-functional-580781
	
	
	==> coredns [0c420a09ed82237c3eba1aa280297cf3d6eef42b2c186b93991ad924d809a5b4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32989 - 54781 "HINFO IN 1322808675416363747.3298756715011358413. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079188165s
	
	
	==> coredns [49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39260 - 3064 "HINFO IN 8182008874646901959.6041357028063081178. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.094703399s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-580781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-580781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=functional-580781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T08_48_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 08:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-580781
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:05:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 09:04:03 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 09:04:03 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 09:04:03 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 09:04:03 +0000   Mon, 29 Sep 2025 08:49:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-580781
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 565a9e40e71a440f889c5f66396fc290
	  System UUID:                10e5194d-9350-4f16-9277-d0c31ca42e51
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-rxhk2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-node-connect-7d85dfc575-thgc5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-g7nlv                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-qn4f9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 etcd-functional-580781                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-pnn6t                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-580781              250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-580781     200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-7zlkp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-580781              100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-m95gr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vt9lx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     17m                kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           17m                node-controller  Node functional-580781 event: Registered Node functional-580781 in Controller
	  Normal  NodeReady                16m                kubelet          Node functional-580781 status is now: NodeReady
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x8 over 16m)  kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node functional-580781 event: Registered Node functional-580781 in Controller
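
The percentages in the Allocated resources block above follow directly from the node's allocatable capacity (8 CPUs, 32863448Ki memory). The short sketch below reproduces them, assuming the truncating integer rounding that the kubectl output appears to use:

package main

import "fmt"

// Recompute the (%) column of the Allocated resources table from the
// allocatable capacity reported for functional-580781. The truncating
// integer division mirrors how the percentages in the table round.
func main() {
	const (
		allocCPUm  = 8000            // allocatable CPU in millicores (8 CPUs)
		allocMemKi = int64(32863448) // allocatable memory in Ki
		mi         = int64(1024)     // Ki per Mi
	)
	pct := func(used, total int64) int64 { return used * 100 / total }

	fmt.Printf("cpu requests: 1450m -> %d%%\n", pct(1450, allocCPUm))    // 18%
	fmt.Printf("cpu limits:    800m -> %d%%\n", pct(800, allocCPUm))     // 10%
	fmt.Printf("mem requests: 732Mi -> %d%%\n", pct(732*mi, allocMemKi)) // 2%
	fmt.Printf("mem limits:   920Mi -> %d%%\n", pct(920*mi, allocMemKi)) // 2%
}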
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[ +16.774979] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 21 41 37 dd f5 08 06
	[  +0.000328] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[  +6.075530] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 33 34 7b 85 cf 08 06
	[  +0.055887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[Sep29 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fb 19 b5 d0 db 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[  +6.806604] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[ +13.433681] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	[  +8.966707] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 f7 73 94 db cd 08 06
	[  +0.000344] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[Sep29 08:07] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 ad d0 02 25 47 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	
	
	==> etcd [06427c125c739d8a8454d779cd4b1110ffca144587807bfc615ab7ba3aa85f21] <==
	{"level":"warn","ts":"2025-09-29T08:49:56.778356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.784387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.791034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.797707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.804065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.811416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.818910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.827589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.835603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.842079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.849060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.855818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.861671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.868051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.874174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.898754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.904987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.911930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.955188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41024","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T08:59:56.447779Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":998}
	{"level":"info","ts":"2025-09-29T08:59:56.456064Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":998,"took":"7.926939ms","hash":2515072890,"current-db-size-bytes":3457024,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":3457024,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2025-09-29T08:59:56.456115Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2515072890,"revision":998,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T09:04:56.452306Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1470}
	{"level":"info","ts":"2025-09-29T09:04:56.456338Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1470,"took":"3.700305ms","hash":2186745829,"current-db-size-bytes":3457024,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":2584576,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2025-09-29T09:04:56.456378Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2186745829,"revision":1470,"compact-revision":998}
	
	
	==> etcd [83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4] <==
	{"level":"warn","ts":"2025-09-29T08:48:48.105391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.112542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.118842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.131384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.137802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.144433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.191696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39464","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T08:49:36.606109Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T08:49:36.606181Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-580781","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T08:49:36.606264Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T08:49:43.608132Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T08:49:43.608344Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.608389Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T08:49:43.608451Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608422Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608436Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608484Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608489Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T08:49:43.608500Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T08:49:43.608502Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.608467Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T08:49:43.610858Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T08:49:43.611011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.611040Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T08:49:43.611047Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-580781","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:05:59 up  2:48,  0 users,  load average: 0.33, 0.28, 0.35
	Linux functional-580781 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0] <==
	I0929 08:48:57.458656       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 08:48:57.458926       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 08:48:57.459093       1 main.go:148] setting mtu 1500 for CNI 
	I0929 08:48:57.459112       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 08:48:57.459139       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T08:48:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 08:48:57.660610       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 08:48:57.660631       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 08:48:57.660640       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 08:48:57.754818       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 08:48:58.060813       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 08:48:58.060862       1 metrics.go:72] Registering metrics
	I0929 08:48:58.060920       1 controller.go:711] "Syncing nftables rules"
	I0929 08:49:07.661018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:07.661159       1 main.go:301] handling current node
	I0929 08:49:17.661209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:17.661245       1 main.go:301] handling current node
	I0929 08:49:27.665005       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:27.665053       1 main.go:301] handling current node
	
	
	==> kindnet [419813926dfe4f3e19e4ed90e311ff20fe542f74f8ebf0dc42045be7549c7203] <==
	I0929 09:03:57.803715       1 main.go:301] handling current node
	I0929 09:04:07.805643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:04:07.805700       1 main.go:301] handling current node
	I0929 09:04:17.806425       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:04:17.806461       1 main.go:301] handling current node
	I0929 09:04:27.805497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:04:27.805567       1 main.go:301] handling current node
	I0929 09:04:37.803901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:04:37.803938       1 main.go:301] handling current node
	I0929 09:04:47.802801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:04:47.802871       1 main.go:301] handling current node
	I0929 09:04:57.803368       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:04:57.803422       1 main.go:301] handling current node
	I0929 09:05:07.803716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:05:07.803754       1 main.go:301] handling current node
	I0929 09:05:17.804394       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:05:17.804442       1 main.go:301] handling current node
	I0929 09:05:27.805932       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:05:27.805998       1 main.go:301] handling current node
	I0929 09:05:37.804097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:05:37.804136       1 main.go:301] handling current node
	I0929 09:05:47.803455       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:05:47.803525       1 main.go:301] handling current node
	I0929 09:05:57.803654       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:05:57.803700       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3201afa40ac947ad27f530616359700f2260d511660f89535877216d9ccda60f] <==
	I0929 08:49:57.427650       1 autoregister_controller.go:144] Starting autoregister controller
	I0929 08:49:57.427655       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0929 08:49:57.427659       1 cache.go:39] Caches are synced for autoregister controller
	I0929 08:49:57.428967       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0929 08:49:57.428988       1 policy_source.go:240] refreshing policies
	I0929 08:49:57.451215       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 08:49:57.452554       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0929 08:49:58.320496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0929 08:49:58.525606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0929 08:49:58.526853       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 08:49:58.531444       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 08:49:59.297430       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 08:49:59.400126       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 08:49:59.467824       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 08:49:59.473940       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 08:50:01.045050       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 08:50:15.663020       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.209.181"}
	I0929 08:50:19.847113       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.150.212"}
	I0929 08:50:21.656932       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.202.166"}
	I0929 08:52:30.762468       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.76.169"}
	I0929 08:55:57.798872       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.203.171"}
	I0929 08:56:32.068798       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 08:56:32.186099       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.247.251"}
	I0929 08:56:32.201368       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.131.220"}
	I0929 08:59:57.353812       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1a6c4fa503da3ece68dc966f8fd6d8ebafc5d006b9831ba53bd6369943bfd8a8] <==
	I0929 08:49:46.426216       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0929 08:49:46.426241       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	I0929 08:49:46.477303       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0929 08:49:46.477328       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-serving"
	I0929 08:49:46.477387       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.477602       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0929 08:49:46.477627       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I0929 08:49:46.477678       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.478062       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0929 08:49:46.478084       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 08:49:46.478100       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.478446       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0929 08:49:46.478471       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0929 08:49:46.478527       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0929 08:49:46.478544       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I0929 08:49:46.478586       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.525502       1 controllermanager.go:781] "Started controller" controller="persistentvolume-protection-controller"
	I0929 08:49:46.525575       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0929 08:49:46.525583       1 shared_informer.go:349] "Waiting for caches to sync" controller="PV protection"
	I0929 08:49:46.576219       1 controllermanager.go:781] "Started controller" controller="ephemeral-volume-controller"
	I0929 08:49:46.576246       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0929 08:49:46.576263       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I0929 08:49:46.576298       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0929 08:49:46.576312       1 shared_informer.go:349] "Waiting for caches to sync" controller="ephemeral"
	F0929 08:49:47.625587       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [47f1c99fd1006fd2040b7a6a3a2e570a4c9366287bc4a9bb519ddf562e9c5ea9] <==
	I0929 08:50:00.740244       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 08:50:00.740360       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 08:50:00.741441       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 08:50:00.741457       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 08:50:00.741489       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 08:50:00.741500       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 08:50:00.741534       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 08:50:00.741600       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 08:50:00.741671       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 08:50:00.741777       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-580781"
	I0929 08:50:00.741828       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 08:50:00.742012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 08:50:00.743274       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 08:50:00.743330       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 08:50:00.743372       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 08:50:00.744533       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 08:50:00.744549       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 08:50:00.745371       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:50:00.763731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 08:56:32.141757       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.141939       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.145888       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.146332       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.150732       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.151510       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510] <==
	I0929 08:48:57.328633       1 server_linux.go:53] "Using iptables proxy"
	I0929 08:48:57.398736       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:48:57.499156       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:48:57.499205       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:48:57.499363       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:48:57.517179       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:48:57.517238       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:48:57.522369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:48:57.522730       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:48:57.522759       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:48:57.524004       1 config.go:200] "Starting service config controller"
	I0929 08:48:57.524408       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:48:57.524611       1 config.go:309] "Starting node config controller"
	I0929 08:48:57.524638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:48:57.525031       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:48:57.525043       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:48:57.525096       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:48:57.525103       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:48:57.624518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:48:57.625676       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 08:48:57.625779       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:48:57.625802       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ef2ab2b48d81ada5a6d38c217b125bc7066f486fe3d353763fa03f3e46cf1062] <==
	I0929 08:49:37.495720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:49:37.595898       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:49:37.595958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:49:37.596323       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:49:37.616663       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:49:37.616736       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:49:37.622131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:49:37.622572       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:49:37.622607       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:49:37.623810       1 config.go:200] "Starting service config controller"
	I0929 08:49:37.623827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:49:37.623926       1 config.go:309] "Starting node config controller"
	I0929 08:49:37.623973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:49:37.624025       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:49:37.624039       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:49:37.624063       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:49:37.624068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:49:37.724863       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:49:37.724889       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:49:37.724902       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 08:49:37.724927       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E0929 08:49:57.362242       1 reflector.go:205] "Failed to watch" err="nodes \"functional-580781\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:49:57.362283       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E0929 08:49:57.362242       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:49:57.362240       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-scheduler [31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8] <==
	E0929 08:48:48.629336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:48:48.629343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:48:48.629190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:48:48.629108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:48:48.629583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:48:48.629604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 08:48:49.481623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 08:48:49.529255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 08:48:49.592487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:48:49.604559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:48:49.694389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:48:49.697302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 08:48:49.731820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:48:49.745001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 08:48:49.759498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:48:49.789574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 08:48:49.801523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:48:49.827015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0929 08:48:50.224669       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:53.820543       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 08:49:53.820584       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:53.820641       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 08:49:53.820662       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 08:49:53.820691       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 08:49:53.820718       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [346cf15effa5119adbb50a15e72686cb099db1666fa69bfc2a68c8fe414f1503] <==
	I0929 08:49:56.473573       1 serving.go:386] Generated self-signed cert in-memory
	W0929 08:49:57.340173       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 08:49:57.340209       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 08:49:57.340222       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 08:49:57.340232       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 08:49:57.364559       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I0929 08:49:57.364580       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:49:57.366868       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:57.366910       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:57.367205       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 08:49:57.367245       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 08:49:57.467721       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 09:05:27 functional-580781 kubelet[5417]: E0929 09:05:27.438996    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="29b039fe-6e18-4585-9490-7bba9fa796cf"
	Sep 29 09:05:30 functional-580781 kubelet[5417]: E0929 09:05:30.438430    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vt9lx" podUID="f1ae0b1e-3f0a-4ea9-8226-53ff2ab3b178"
	Sep 29 09:05:31 functional-580781 kubelet[5417]: E0929 09:05:31.438535    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 09:05:34 functional-580781 kubelet[5417]: E0929 09:05:34.106155    5417 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 09:05:34 functional-580781 kubelet[5417]: E0929 09:05:34.106211    5417 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 09:05:34 functional-580781 kubelet[5417]: E0929 09:05:34.106427    5417 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(fef4d926-fc98-4617-80db-05dd451129c3): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 09:05:34 functional-580781 kubelet[5417]: E0929 09:05:34.106488    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fef4d926-fc98-4617-80db-05dd451129c3"
	Sep 29 09:05:34 functional-580781 kubelet[5417]: E0929 09:05:34.437275    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-thgc5" podUID="fa8e859c-e2eb-4366-bf33-3fbbc9df80d6"
	Sep 29 09:05:35 functional-580781 kubelet[5417]: E0929 09:05:35.578668    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136735578437179  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:05:35 functional-580781 kubelet[5417]: E0929 09:05:35.578704    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136735578437179  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:05:40 functional-580781 kubelet[5417]: E0929 09:05:40.437809    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-rxhk2" podUID="c14b0343-8ceb-4ede-99c3-a1a1c337e9ab"
	Sep 29 09:05:42 functional-580781 kubelet[5417]: E0929 09:05:42.438817    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vt9lx" podUID="f1ae0b1e-3f0a-4ea9-8226-53ff2ab3b178"
	Sep 29 09:05:42 functional-580781 kubelet[5417]: E0929 09:05:42.438895    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 09:05:42 functional-580781 kubelet[5417]: E0929 09:05:42.438903    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="29b039fe-6e18-4585-9490-7bba9fa796cf"
	Sep 29 09:05:45 functional-580781 kubelet[5417]: E0929 09:05:45.580088    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136745579873269  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:05:45 functional-580781 kubelet[5417]: E0929 09:05:45.580130    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136745579873269  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:05:46 functional-580781 kubelet[5417]: E0929 09:05:46.438186    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fef4d926-fc98-4617-80db-05dd451129c3"
	Sep 29 09:05:49 functional-580781 kubelet[5417]: E0929 09:05:49.437425    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-thgc5" podUID="fa8e859c-e2eb-4366-bf33-3fbbc9df80d6"
	Sep 29 09:05:54 functional-580781 kubelet[5417]: E0929 09:05:54.438100    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-rxhk2" podUID="c14b0343-8ceb-4ede-99c3-a1a1c337e9ab"
	Sep 29 09:05:54 functional-580781 kubelet[5417]: E0929 09:05:54.439055    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 09:05:54 functional-580781 kubelet[5417]: E0929 09:05:54.439168    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vt9lx" podUID="f1ae0b1e-3f0a-4ea9-8226-53ff2ab3b178"
	Sep 29 09:05:55 functional-580781 kubelet[5417]: E0929 09:05:55.581735    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136755581493990  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:05:55 functional-580781 kubelet[5417]: E0929 09:05:55.581779    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136755581493990  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 29 09:05:57 functional-580781 kubelet[5417]: E0929 09:05:57.438410    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fef4d926-fc98-4617-80db-05dd451129c3"
	Sep 29 09:05:57 functional-580781 kubelet[5417]: E0929 09:05:57.439098    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="29b039fe-6e18-4585-9490-7bba9fa796cf"
	
	
	==> storage-provisioner [3ba534cc9995fbd82b83b955735dab9de1c54de1d8fd7119eccb782d77fe63fd] <==
	W0929 09:05:34.592702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:36.594944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:36.598572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:38.601628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:38.605375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:40.609266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:40.614358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:42.617369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:42.621166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:44.624408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:44.628550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:46.631701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:46.635527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:48.638321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:48.642239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:50.645539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:50.651095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:52.654624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:52.658887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:54.662024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:54.665959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:56.669453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:56.673400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:58.678374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:05:58.684021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae] <==
	W0929 08:49:10.392985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:12.396758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:12.400952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:14.404972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:14.410240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:16.414556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:16.418891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:18.422630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:18.426730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:20.430506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:20.434896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:22.437786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:22.441661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:24.444570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:24.448383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:26.451410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:26.456736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:28.460215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:28.464644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:30.467426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:30.475151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:32.478302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:32.482203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:34.485888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:34.489874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-580781 -n functional-580781
helpers_test.go:269: (dbg) Run:  kubectl --context functional-580781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx: exit status 1 (110.546516ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:30 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://db229b500cea2a9d934455d2b9a59a2e28deb77a8bbc7c217b4b73c4b22b9246
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 08:52:24 +0000
	      Finished:     Mon, 29 Sep 2025 08:52:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgs2x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-qgs2x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  15m   default-scheduler  Successfully assigned default/busybox-mount to functional-580781
	  Normal  Pulling    15m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     13m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.114s (1m53.786s including waiting). Image size: 4631262 bytes.
	  Normal  Created    13m   kubelet            Created container: mount-munger
	  Normal  Started    13m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-rxhk2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:52:30 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8j626 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8j626:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-rxhk2 to functional-580781
	  Normal   Pulling    6m7s (x5 over 13m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     4m48s (x5 over 12m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     4m48s (x5 over 12m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    2m28s (x22 over 12m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m28s (x22 over 12m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-thgc5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:55:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5cvn7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5cvn7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-thgc5 to functional-580781
	  Normal   Pulling    2m14s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     88s (x5 over 9m10s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     88s (x5 over 9m10s)   kubelet            Error: ErrImagePull
	  Warning  Failed     26s (x16 over 9m10s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    11s (x17 over 9m10s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-g7nlv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:19 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pnqlc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pnqlc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-g7nlv to functional-580781
	  Warning  Failed     14m                   kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m18s (x5 over 15m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m35s (x5 over 14m)   kubelet            Error: ErrImagePull
	  Warning  Failed     6m35s (x4 over 13m)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m33s (x21 over 14m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    29s (x33 over 14m)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:21 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfpvw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tfpvw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/nginx-svc to functional-580781
	  Warning  Failed     12m                   kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m47s (x5 over 15m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m19s (x4 over 14m)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m19s (x5 over 14m)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m48s (x17 over 14m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    33s (x26 over 14m)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kpg5f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kpg5f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  15m                  default-scheduler  Successfully assigned default/sp-pod to functional-580781
	  Warning  Failed     10m                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m7s (x5 over 15m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     4m18s (x4 over 13m)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m18s (x5 over 13m)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m2s (x17 over 13m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    14s (x24 over 13m)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-m95gr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vt9lx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (368.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [9404f44a-2a63-4a35-abd5-64f6a3e4fb2d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003521998s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-580781 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-580781 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-580781 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-580781 apply -f testdata/storage-provisioner/pod.yaml
I0929 08:50:27.035938  386225 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fef4d926-fc98-4617-80db-05dd451129c3] Pending
helpers_test.go:352: "sp-pod" [fef4d926-fc98-4617-80db-05dd451129c3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-580781 -n functional-580781
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-29 08:56:27.344986702 +0000 UTC m=+1634.991612159
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-580781 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-580781 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-580781/192.168.49.2
Start Time:       Mon, 29 Sep 2025 08:50:27 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:  10.244.0.6
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kpg5f (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-kpg5f:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/sp-pod to functional-580781
Warning  Failed     2m25s (x2 over 4m4s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m (x3 over 6m)       kubelet            Pulling image "docker.io/nginx"
Warning  Failed     39s (x3 over 4m4s)    kubelet            Error: ErrImagePull
Warning  Failed     39s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    2s (x5 over 4m4s)     kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     2s (x5 over 4m4s)     kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-580781 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-580781 logs sp-pod -n default: exit status 1 (68.600383ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-580781 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-580781
helpers_test.go:243: (dbg) docker inspect functional-580781:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0",
	        "Created": "2025-09-29T08:48:33.034529223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 426177,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T08:48:33.070958392Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/hosts",
	        "LogPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0-json.log",
	        "Name": "/functional-580781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-580781:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-580781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0",
	                "LowerDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-580781",
	                "Source": "/var/lib/docker/volumes/functional-580781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-580781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-580781",
	                "name.minikube.sigs.k8s.io": "functional-580781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b37cbd8f035d18de42849ede2340b295b85fe84979fff6ab1cec7b19304cded",
	            "SandboxKey": "/var/run/docker/netns/5b37cbd8f035",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-580781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:69:98:c1:90:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "495c1eb850caf76b3c694e019686a6cae7865db2cadf61ef3a9e798cb0bdad99",
	                    "EndpointID": "8c180be2c2eda60e41070ee44e33e49d42b76851992a3e20cd0612627b94aff0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-580781",
	                        "38862aa7a2bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
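A minimal sketch, not part of the captured test output: the mapped SSH port recorded under "Ports" in the inspect dump above can be read back with the same Go template that the provisioner runs later in this log (the cli_runner "docker container inspect -f" calls), assuming the functional-580781 container is still present on the host:

	# print the host port published for 22/tcp on the kic container
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-580781
	# with the NetworkSettings captured above, this prints 33149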
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-580781 -n functional-580781
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 logs -n 25: (1.420784927s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-580781 image ls                                                                                                        │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │ 29 Sep 25 08:50 UTC │
	│ image   │ functional-580781 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │ 29 Sep 25 08:50 UTC │
	│ image   │ functional-580781 image ls                                                                                                        │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │ 29 Sep 25 08:50 UTC │
	│ image   │ functional-580781 image save --daemon kicbase/echo-server:functional-580781 --alsologtostderr                                     │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │ 29 Sep 25 08:50 UTC │
	│ mount   │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdany-port2091007709/001:/mount-9p --alsologtostderr -v=1                   │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │                     │
	│ ssh     │ functional-580781 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │                     │
	│ ssh     │ functional-580781 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │ 29 Sep 25 08:50 UTC │
	│ ssh     │ functional-580781 ssh -- ls -la /mount-9p                                                                                         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │ 29 Sep 25 08:50 UTC │
	│ ssh     │ functional-580781 ssh cat /mount-9p/test-1759135828609145608                                                                      │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │ 29 Sep 25 08:50 UTC │
	│ ssh     │ functional-580781 ssh stat /mount-9p/created-by-test                                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh     │ functional-580781 ssh stat /mount-9p/created-by-pod                                                                               │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh     │ functional-580781 ssh sudo umount -f /mount-9p                                                                                    │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ mount   │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdspecific-port2497644652/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh     │ functional-580781 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh     │ functional-580781 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh     │ functional-580781 ssh -- ls -la /mount-9p                                                                                         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh     │ functional-580781 ssh sudo umount -f /mount-9p                                                                                    │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ mount   │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount2 --alsologtostderr -v=1                 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh     │ functional-580781 ssh findmnt -T /mount1                                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ mount   │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount3 --alsologtostderr -v=1                 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ mount   │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount1 --alsologtostderr -v=1                 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh     │ functional-580781 ssh findmnt -T /mount1                                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh     │ functional-580781 ssh findmnt -T /mount2                                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh     │ functional-580781 ssh findmnt -T /mount3                                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ mount   │ -p functional-580781 --kill=true                                                                                                  │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:49:25
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:49:25.489752  431849 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:49:25.489872  431849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:49:25.489877  431849 out.go:374] Setting ErrFile to fd 2...
	I0929 08:49:25.489881  431849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:49:25.490131  431849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:49:25.490611  431849 out.go:368] Setting JSON to false
	I0929 08:49:25.491769  431849 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9114,"bootTime":1759126651,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:49:25.491901  431849 start.go:140] virtualization: kvm guest
	I0929 08:49:25.494497  431849 out.go:179] * [functional-580781] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:49:25.495810  431849 notify.go:220] Checking for updates...
	I0929 08:49:25.495861  431849 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:49:25.497214  431849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:49:25.498548  431849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:49:25.499939  431849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:49:25.501452  431849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:49:25.502852  431849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:49:25.504693  431849 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:49:25.504794  431849 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:49:25.529819  431849 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:49:25.529914  431849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:49:25.585461  431849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-29 08:49:25.575184547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:49:25.585613  431849 docker.go:318] overlay module found
	I0929 08:49:25.587632  431849 out.go:179] * Using the docker driver based on existing profile
	I0929 08:49:25.588909  431849 start.go:304] selected driver: docker
	I0929 08:49:25.588917  431849 start.go:924] validating driver "docker" against &{Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:49:25.589012  431849 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:49:25.589100  431849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:49:25.645444  431849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-29 08:49:25.634317803 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:49:25.646097  431849 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:49:25.646126  431849 cni.go:84] Creating CNI manager for ""
	I0929 08:49:25.646186  431849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:49:25.646233  431849 start.go:348] cluster config:
	{Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:49:25.647939  431849 out.go:179] * Starting "functional-580781" primary control-plane node in "functional-580781" cluster
	I0929 08:49:25.649262  431849 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 08:49:25.650559  431849 out.go:179] * Pulling base image v0.0.48 ...
	I0929 08:49:25.651777  431849 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:49:25.651814  431849 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 08:49:25.651825  431849 cache.go:58] Caching tarball of preloaded images
	I0929 08:49:25.651882  431849 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 08:49:25.651961  431849 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 08:49:25.652036  431849 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 08:49:25.652202  431849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/config.json ...
	I0929 08:49:25.672694  431849 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 08:49:25.672705  431849 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 08:49:25.672734  431849 cache.go:232] Successfully downloaded all kic artifacts
	I0929 08:49:25.672763  431849 start.go:360] acquireMachinesLock for functional-580781: {Name:mk27a70008b25241e57f53dc9107d91cacfebecb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 08:49:25.672824  431849 start.go:364] duration metric: took 44.972µs to acquireMachinesLock for "functional-580781"
	I0929 08:49:25.672859  431849 start.go:96] Skipping create...Using existing machine configuration
	I0929 08:49:25.672865  431849 fix.go:54] fixHost starting: 
	I0929 08:49:25.673170  431849 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
	I0929 08:49:25.690986  431849 fix.go:112] recreateIfNeeded on functional-580781: state=Running err=<nil>
	W0929 08:49:25.691030  431849 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 08:49:25.693138  431849 out.go:252] * Updating the running docker "functional-580781" container ...
	I0929 08:49:25.693177  431849 machine.go:93] provisionDockerMachine start ...
	I0929 08:49:25.693249  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:25.711146  431849 main.go:141] libmachine: Using SSH client type: native
	I0929 08:49:25.711402  431849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0929 08:49:25.711410  431849 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 08:49:25.846962  431849 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-580781
	
	I0929 08:49:25.846987  431849 ubuntu.go:182] provisioning hostname "functional-580781"
	I0929 08:49:25.847078  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:25.865641  431849 main.go:141] libmachine: Using SSH client type: native
	I0929 08:49:25.865881  431849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0929 08:49:25.865889  431849 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-580781 && echo "functional-580781" | sudo tee /etc/hostname
	I0929 08:49:26.014632  431849 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-580781
	
	I0929 08:49:26.014696  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:26.033668  431849 main.go:141] libmachine: Using SSH client type: native
	I0929 08:49:26.033912  431849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0929 08:49:26.033925  431849 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-580781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-580781/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-580781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 08:49:26.170615  431849 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 08:49:26.170637  431849 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 08:49:26.170684  431849 ubuntu.go:190] setting up certificates
	I0929 08:49:26.170697  431849 provision.go:84] configureAuth start
	I0929 08:49:26.170756  431849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-580781
	I0929 08:49:26.188984  431849 provision.go:143] copyHostCerts
	I0929 08:49:26.189053  431849 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem, removing ...
	I0929 08:49:26.189067  431849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem
	I0929 08:49:26.189130  431849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 08:49:26.189244  431849 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem, removing ...
	I0929 08:49:26.189249  431849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem
	I0929 08:49:26.189280  431849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 08:49:26.189419  431849 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem, removing ...
	I0929 08:49:26.189424  431849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem
	I0929 08:49:26.189450  431849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 08:49:26.189509  431849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.functional-580781 san=[127.0.0.1 192.168.49.2 functional-580781 localhost minikube]
	I0929 08:49:26.649881  431849 provision.go:177] copyRemoteCerts
	I0929 08:49:26.649932  431849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 08:49:26.649982  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:26.668214  431849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
	I0929 08:49:26.764912  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 08:49:26.790449  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 08:49:26.816357  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 08:49:26.841534  431849 provision.go:87] duration metric: took 670.823145ms to configureAuth
	I0929 08:49:26.841555  431849 ubuntu.go:206] setting minikube options for container-runtime
	I0929 08:49:26.841728  431849 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:49:26.841821  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:26.859700  431849 main.go:141] libmachine: Using SSH client type: native
	I0929 08:49:26.859974  431849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I0929 08:49:26.859986  431849 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 08:49:27.239123  431849 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 08:49:27.239146  431849 machine.go:96] duration metric: took 1.545961768s to provisionDockerMachine
	I0929 08:49:27.239158  431849 start.go:293] postStartSetup for "functional-580781" (driver="docker")
	I0929 08:49:27.239167  431849 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 08:49:27.239219  431849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 08:49:27.239273  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:27.257694  431849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
	I0929 08:49:27.356015  431849 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 08:49:27.359376  431849 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 08:49:27.359394  431849 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 08:49:27.359400  431849 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 08:49:27.359406  431849 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 08:49:27.359415  431849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 08:49:27.359469  431849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 08:49:27.359532  431849 filesync.go:149] local asset: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem -> 3862252.pem in /etc/ssl/certs
	I0929 08:49:27.359596  431849 filesync.go:149] local asset: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/test/nested/copy/386225/hosts -> hosts in /etc/test/nested/copy/386225
	I0929 08:49:27.359630  431849 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/386225
	I0929 08:49:27.368491  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 08:49:27.394035  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/test/nested/copy/386225/hosts --> /etc/test/nested/copy/386225/hosts (40 bytes)
	I0929 08:49:27.419345  431849 start.go:296] duration metric: took 180.169965ms for postStartSetup
	I0929 08:49:27.419422  431849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 08:49:27.419456  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:27.437920  431849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
	I0929 08:49:27.531408  431849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 08:49:27.536079  431849 fix.go:56] duration metric: took 1.863204634s for fixHost
	I0929 08:49:27.536099  431849 start.go:83] releasing machines lock for "functional-580781", held for 1.863267149s
	I0929 08:49:27.536171  431849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-580781
	I0929 08:49:27.555787  431849 ssh_runner.go:195] Run: cat /version.json
	I0929 08:49:27.555824  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:27.555888  431849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 08:49:27.555963  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:27.576395  431849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
	I0929 08:49:27.576554  431849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
	I0929 08:49:27.742862  431849 ssh_runner.go:195] Run: systemctl --version
	I0929 08:49:27.747660  431849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 08:49:27.890024  431849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 08:49:27.895070  431849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:49:27.905001  431849 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 08:49:27.905098  431849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 08:49:27.914390  431849 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 08:49:27.914406  431849 start.go:495] detecting cgroup driver to use...
	I0929 08:49:27.914435  431849 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 08:49:27.914486  431849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 08:49:27.927603  431849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 08:49:27.939678  431849 docker.go:218] disabling cri-docker service (if available) ...
	I0929 08:49:27.939728  431849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 08:49:27.953666  431849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 08:49:27.966236  431849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 08:49:28.076420  431849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 08:49:28.184899  431849 docker.go:234] disabling docker service ...
	I0929 08:49:28.184954  431849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 08:49:28.198045  431849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 08:49:28.211175  431849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 08:49:28.321730  431849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 08:49:28.434599  431849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 08:49:28.447547  431849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 08:49:28.466986  431849 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:49:28.605674  431849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 08:49:28.605728  431849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:49:28.617126  431849 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 08:49:28.617184  431849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:49:28.628120  431849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:49:28.638792  431849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:49:28.649452  431849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 08:49:28.659460  431849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:49:28.670136  431849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:49:28.680259  431849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 08:49:28.691237  431849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 08:49:28.700170  431849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 08:49:28.709289  431849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:49:28.817415  431849 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 08:49:29.047273  431849 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 08:49:29.047330  431849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 08:49:29.051233  431849 start.go:563] Will wait 60s for crictl version
	I0929 08:49:29.051278  431849 ssh_runner.go:195] Run: which crictl
	I0929 08:49:29.054789  431849 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 08:49:29.088802  431849 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 08:49:29.088885  431849 ssh_runner.go:195] Run: crio --version
	I0929 08:49:29.125057  431849 ssh_runner.go:195] Run: crio --version
	I0929 08:49:29.162974  431849 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	I0929 08:49:29.164192  431849 cli_runner.go:164] Run: docker network inspect functional-580781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 08:49:29.182663  431849 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 08:49:29.188565  431849 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0929 08:49:29.189777  431849 kubeadm.go:875] updating cluster {Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 08:49:29.190010  431849 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:49:29.339007  431849 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:49:29.477877  431849 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:49:29.618214  431849 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 08:49:29.618363  431849 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:49:29.782370  431849 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:49:29.924234  431849 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 08:49:30.059617  431849 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:49:30.101741  431849 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:49:30.101753  431849 crio.go:433] Images already preloaded, skipping extraction
	I0929 08:49:30.101794  431849 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 08:49:30.136549  431849 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 08:49:30.136563  431849 cache_images.go:85] Images are preloaded, skipping loading
	I0929 08:49:30.136570  431849 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I0929 08:49:30.136665  431849 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-580781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 08:49:30.136724  431849 ssh_runner.go:195] Run: crio config
	I0929 08:49:30.180071  431849 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0929 08:49:30.180124  431849 cni.go:84] Creating CNI manager for ""
	I0929 08:49:30.180135  431849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:49:30.180144  431849 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 08:49:30.180169  431849 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-580781 NodeName:functional-580781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map
[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 08:49:30.180304  431849 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-580781"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 08:49:30.180359  431849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 08:49:30.190786  431849 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 08:49:30.190860  431849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 08:49:30.200289  431849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0929 08:49:30.220307  431849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 08:49:30.239302  431849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
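The kubeadm/kubelet/kube-proxy configuration above (kubeadm.go:195) is rendered from the per-profile values logged at kubeadm.go:189 and shipped to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of that render step only, the Go sketch below fills a small ClusterConfiguration fragment from a struct with text/template; the struct, field names and fragment are simplified assumptions for this report, not minikube's actual bootstrapper template or types.

package main

import (
	"os"
	"text/template"
)

// clusterValues is a simplified stand-in for the per-profile values seen in
// the log above; the real configuration carries many more fields.
type clusterValues struct {
	AdvertiseAddress string
	BindPort         int
	AdmissionPlugins string
	PodSubnet        string
	ServiceSubnet    string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "{{.AdmissionPlugins}}"
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	v := clusterValues{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8441,
		AdmissionPlugins: "NamespaceAutoProvision",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	}
	// Render to stdout; the run above instead writes the result to
	// /var/tmp/minikube/kubeadm.yaml.new and diffs it against the old file later.
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	if err := tmpl.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}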
	I0929 08:49:30.258037  431849 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 08:49:30.262146  431849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:49:30.368898  431849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:49:30.381161  431849 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781 for IP: 192.168.49.2
	I0929 08:49:30.381176  431849 certs.go:194] generating shared ca certs ...
	I0929 08:49:30.381198  431849 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:49:30.381352  431849 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 08:49:30.381402  431849 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 08:49:30.381410  431849 certs.go:256] generating profile certs ...
	I0929 08:49:30.381513  431849 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.key
	I0929 08:49:30.381562  431849 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/apiserver.key.458f1734
	I0929 08:49:30.381605  431849 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/proxy-client.key
	I0929 08:49:30.381738  431849 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem (1338 bytes)
	W0929 08:49:30.381771  431849 certs.go:480] ignoring /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225_empty.pem, impossibly tiny 0 bytes
	I0929 08:49:30.381780  431849 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 08:49:30.381805  431849 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 08:49:30.381852  431849 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 08:49:30.381884  431849 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 08:49:30.381934  431849 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 08:49:30.382898  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 08:49:30.409113  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 08:49:30.434895  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 08:49:30.460174  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 08:49:30.486923  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 08:49:30.511700  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 08:49:30.536757  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 08:49:30.562707  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 08:49:30.587633  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem --> /usr/share/ca-certificates/386225.pem (1338 bytes)
	I0929 08:49:30.613179  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /usr/share/ca-certificates/3862252.pem (1708 bytes)
	I0929 08:49:30.638084  431849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 08:49:30.663577  431849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 08:49:30.682228  431849 ssh_runner.go:195] Run: openssl version
	I0929 08:49:30.687852  431849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386225.pem && ln -fs /usr/share/ca-certificates/386225.pem /etc/ssl/certs/386225.pem"
	I0929 08:49:30.698069  431849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386225.pem
	I0929 08:49:30.701871  431849 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 08:48 /usr/share/ca-certificates/386225.pem
	I0929 08:49:30.701923  431849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386225.pem
	I0929 08:49:30.708997  431849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386225.pem /etc/ssl/certs/51391683.0"
	I0929 08:49:30.718658  431849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3862252.pem && ln -fs /usr/share/ca-certificates/3862252.pem /etc/ssl/certs/3862252.pem"
	I0929 08:49:30.728796  431849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3862252.pem
	I0929 08:49:30.732721  431849 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 08:48 /usr/share/ca-certificates/3862252.pem
	I0929 08:49:30.732778  431849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3862252.pem
	I0929 08:49:30.739871  431849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3862252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 08:49:30.748970  431849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 08:49:30.758847  431849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:49:30.762360  431849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:49:30.762409  431849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 08:49:30.769321  431849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
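The steps above install each CA certificate under /etc/ssl/certs by its OpenSSL subject hash: `openssl x509 -hash -noout` prints the hash, and `ln -fs` creates the `<hash>.0` link pointing at the .pem file. A minimal Go sketch of that pattern, assuming local paths and permissions rather than the sudo-over-SSH calls used here:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert installs certPath into certsDir under its OpenSSL subject-hash
// name ("<hash>.0"), mirroring how minikubeCA.pem and the two .pem files are
// wired into /etc/ssl/certs above. Writing there normally requires root.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	// Equivalent of ln -fs: drop any stale link, then point it at the cert.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}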
	I0929 08:49:30.778899  431849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 08:49:30.782697  431849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 08:49:30.789593  431849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 08:49:30.796640  431849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 08:49:30.803520  431849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 08:49:30.810291  431849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 08:49:30.817153  431849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
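The `openssl x509 -checkend 86400` calls above fail when a certificate would expire within the next 24 hours. The same test can be expressed directly with Go's crypto/x509, as in this sketch; the path is taken from the log and would normally require root to read.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given window; `openssl x509 -checkend 86400` performs the same
// check for a 24-hour window and exits non-zero when it would expire.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}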
	I0929 08:49:30.823827  431849 kubeadm.go:392] StartCluster: {Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:49:30.823930  431849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 08:49:30.823996  431849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 08:49:30.860454  431849 cri.go:89] found id: "49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33"
	I0929 08:49:30.860466  431849 cri.go:89] found id: "8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae"
	I0929 08:49:30.860470  431849 cri.go:89] found id: "1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0"
	I0929 08:49:30.860472  431849 cri.go:89] found id: "3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510"
	I0929 08:49:30.860474  431849 cri.go:89] found id: "ea587e65db657aa8426b48de4a514cc06ee9682de69f373b625ecaeb016e9174"
	I0929 08:49:30.860477  431849 cri.go:89] found id: "83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4"
	I0929 08:49:30.860478  431849 cri.go:89] found id: "1096ad61528e3321131814ec88ace2fa301f202bb31dfc3364ed1aab9445b86e"
	I0929 08:49:30.860480  431849 cri.go:89] found id: "31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8"
	I0929 08:49:30.860482  431849 cri.go:89] found id: ""
	I0929 08:49:30.860519  431849 ssh_runner.go:195] Run: sudo runc list -f json
	I0929 08:49:30.883079  431849 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"1096ad61528e3321131814ec88ace2fa301f202bb31dfc3364ed1aab9445b86e","pid":1432,"status":"running","bundle":"/run/containers/storage/overlay-containers/1096ad61528e3321131814ec88ace2fa301f202bb31dfc3364ed1aab9445b86e/userdata","rootfs":"/var/lib/containers/storage/overlay/13414b5d171149f71886200f60bb1a96be4dfe1e3f20733b1085a4212cf5d2bf/merged","created":"2025-09-29T08:48:46.817739334Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d0cc63c7","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d0cc63c7\",\"io.kubernetes.container.ports\
":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8441,\\\"containerPort\\\":8441,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1096ad61528e3321131814ec88ace2fa301f202bb31dfc3364ed1aab9445b86e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T08:48:46.753886787Z","io.kubernetes.cri-o.Image":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri-o.ImageRef":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-580781\",\"io.kubernetes.pod.namespace\":\"kube-s
ystem\",\"io.kubernetes.pod.uid\":\"6da0c54cd6a73fe4c271075eff51ccae\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-580781_6da0c54cd6a73fe4c271075eff51ccae/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/13414b5d171149f71886200f60bb1a96be4dfe1e3f20733b1085a4212cf5d2bf/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-580781_kube-system_6da0c54cd6a73fe4c271075eff51ccae_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/34eb7cdf5ae3a9317a9521eac94a5c9d66d86f1ac859fee44bb6e82c5a3a6318/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"34eb7cdf5ae3a9317a9521eac94a5c9d66d86f1ac859fee44bb6e82c5a3a6318","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-580781_kube-system_6da0c54cd6a73fe4c271075eff51ccae_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kuber
netes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6da0c54cd6a73fe4c271075eff51ccae/containers/kube-apiserver/dd4b6135\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6da0c54cd6a73fe4c271075eff51ccae/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs
\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-580781","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6da0c54cd6a73fe4c271075eff51ccae","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"6da0c54cd6a73fe4c271075eff51ccae","kubernetes.io/config.seen":"2025-09-29T08:48:46.256168911Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0"
,"pid":1899,"status":"running","bundle":"/run/containers/storage/overlay-containers/1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0/userdata","rootfs":"/var/lib/containers/storage/overlay/0325cd5d04eb92c403b6eccc9cb7d7c61c74031c403426fde1d046c91f616803/merged","created":"2025-09-29T08:48:57.264859448Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"127fdb84","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"127fdb84\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1bfc7f0b08c9ebcb2de9450041b131d889b2c233a
415db8de378bc8114a859d0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T08:48:57.223618039Z","io.kubernetes.cri-o.Image":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri-o.ImageRef":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-pnn6t\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c1fe1e44-adab-40da-af6f-88ef5240ddcb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-pnn6t_c1fe1e44-adab-40da-af6f-88ef5240ddcb/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0325cd5d04eb92c403b6eccc9cb7d7c61c74031c403426fde1d046c91f616803/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni
_kindnet-pnn6t_kube-system_c1fe1e44-adab-40da-af6f-88ef5240ddcb_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c865c04855dee9272af8eb903bc0cf77f45d74520f67dcfcb32e464fabc42895/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c865c04855dee9272af8eb903bc0cf77f45d74520f67dcfcb32e464fabc42895","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-pnn6t_kube-system_c1fe1e44-adab-40da-af6f-88ef5240ddcb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c1fe1e44-adab-40da-af6f-88ef5240ddcb/etc-hosts\",\"readon
ly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c1fe1e44-adab-40da-af6f-88ef5240ddcb/containers/kindnet-cni/1c97125b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c1fe1e44-adab-40da-af6f-88ef5240ddcb/volumes/kubernetes.io~projected/kube-api-access-dpkfs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-pnn6t","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c1fe1e44-adab-40da-af6f-88ef5240ddcb","kubernetes.io/config.seen":"2025-09-29T08:48:56.862984148Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.sy
stemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8","pid":1441,"status":"running","bundle":"/run/containers/storage/overlay-containers/31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8/userdata","rootfs":"/var/lib/containers/storage/overlay/449788c21441d4c014a179def5aac4105a48f5c404a13170f68c267b465642d0/merged","created":"2025-09-29T08:48:46.81953669Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"af42bbeb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"Fil
e","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"af42bbeb\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T08:48:46.736613197Z","io.kubernetes.cri-o.Image":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri-o.ImageRef":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"ku
be-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-580781\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9ad7a232a4b7d990583e3839fcf8db2a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-580781_9ad7a232a4b7d990583e3839fcf8db2a/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/449788c21441d4c014a179def5aac4105a48f5c404a13170f68c267b465642d0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-580781_kube-system_9ad7a232a4b7d990583e3839fcf8db2a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/56f4894c02564446fbcb397e3f72c9691f4f3a6d84fc7139bdedb4e77ec15fac/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"56f4894c02564446fbcb397e3f72c9691f4f3a6d84fc7139bdedb4e77ec15fac","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-580781_kube-system_9ad7a2
32a4b7d990583e3839fcf8db2a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9ad7a232a4b7d990583e3839fcf8db2a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9ad7a232a4b7d990583e3839fcf8db2a/containers/kube-scheduler/bc60ef2a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-580781","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9ad7a232a4b7d990583e3839fcf8db2a","kubernetes.io/config.hash":"
9ad7a232a4b7d990583e3839fcf8db2a","kubernetes.io/config.seen":"2025-09-29T08:48:46.256163378Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510","pid":1907,"status":"running","bundle":"/run/containers/storage/overlay-containers/3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510/userdata","rootfs":"/var/lib/containers/storage/overlay/dc95d409f03c9f31af10146fb4df0295977238ca0c8880cbfbbf248940dfe640/merged","created":"2025-09-29T08:48:57.267455508Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"96651ac1","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-lo
g","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"96651ac1\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T08:48:57.215023114Z","io.kubernetes.cri-o.Image":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.34.1","io.kubernetes.cri-o.ImageRef":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-7zlkp\",\"io.kubernetes.pod.namespace\":\"kube-sy
stem\",\"io.kubernetes.pod.uid\":\"63373f56-af01-4ce0-83c7-57300f541f3f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-7zlkp_63373f56-af01-4ce0-83c7-57300f541f3f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dc95d409f03c9f31af10146fb4df0295977238ca0c8880cbfbbf248940dfe640/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-7zlkp_kube-system_63373f56-af01-4ce0-83c7-57300f541f3f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/630401fd11ff48de5b6b4ec5a1d53e93c48f5b4b3fd8eb8c142302b3cce0be42/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"630401fd11ff48de5b6b4ec5a1d53e93c48f5b4b3fd8eb8c142302b3cce0be42","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-7zlkp_kube-system_63373f56-af01-4ce0-83c7-57300f541f3f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes
.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/63373f56-af01-4ce0-83c7-57300f541f3f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/63373f56-af01-4ce0-83c7-57300f541f3f/containers/kube-proxy/3d6ab087\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/63373f56-af01-4ce0-83c7-57300f541f3f/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\"
,\"host_path\":\"/var/lib/kubelet/pods/63373f56-af01-4ce0-83c7-57300f541f3f/volumes/kubernetes.io~projected/kube-api-access-pht2g\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-7zlkp","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"63373f56-af01-4ce0-83c7-57300f541f3f","kubernetes.io/config.seen":"2025-09-29T08:48:56.861531645Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33","pid":2210,"status":"running","bundle":"/run/containers/storage/overlay-containers/49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33/userdata","rootfs":"/var/lib/containers/storage/overlay
/84befe6af1ef20341573c9cabb27a6278018ab9ec634bb6e5dd56d470cd23482/merged","created":"2025-09-29T08:49:08.355528001Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9bf792","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9bf792\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\"
,\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"liveness-probe\\\",\\\"containerPort\\\":8080,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"readiness-probe\\\",\\\"containerPort\\\":8181,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T08:49:08.309458524Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri-o.ImageRef
":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bc5c9577-qn4f9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5ebe4871-c870-4dc3-b427-56b94b73b2b7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bc5c9577-qn4f9_5ebe4871-c870-4dc3-b427-56b94b73b2b7/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/84befe6af1ef20341573c9cabb27a6278018ab9ec634bb6e5dd56d470cd23482/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bc5c9577-qn4f9_kube-system_5ebe4871-c870-4dc3-b427-56b94b73b2b7_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6fa5626cbca36a3100debe63333586f3408e42e9f3de0af61e43cfd5c9c6ca05/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6fa5626cbca36a3100debe63333586f3408e42e9f3de0af61e43cfd5c9c
6ca05","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bc5c9577-qn4f9_kube-system_5ebe4871-c870-4dc3-b427-56b94b73b2b7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/5ebe4871-c870-4dc3-b427-56b94b73b2b7/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5ebe4871-c870-4dc3-b427-56b94b73b2b7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5ebe4871-c870-4dc3-b427-56b94b73b2b7/containers/coredns/951810d8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/l
ib/kubelet/pods/5ebe4871-c870-4dc3-b427-56b94b73b2b7/volumes/kubernetes.io~projected/kube-api-access-qgbc9\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bc5c9577-qn4f9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5ebe4871-c870-4dc3-b427-56b94b73b2b7","kubernetes.io/config.seen":"2025-09-29T08:49:07.947632076Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4","pid":1443,"status":"running","bundle":"/run/containers/storage/overlay-containers/83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4/userdata","rootfs":"/var/lib/containers/storage/overlay/7117b556a54d0e
0eec8ae8f33610dec66665b24ab58aab6c2f22b4004a0e0c7a/merged","created":"2025-09-29T08:48:46.821188465Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.Conta
inerID":"83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T08:48:46.754740119Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-580781\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b7e8085279b73468c7c9ebe8de57600f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-580781_b7e8085279b73468c7c9ebe8de57600f/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7117b556a54d0e0eec8ae8f33610dec66665b24ab58aab6c2f22b4004a0e0c7a/merged","io.kubernetes.cri-o.
Name":"k8s_etcd_etcd-functional-580781_kube-system_b7e8085279b73468c7c9ebe8de57600f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0823e3669f0610082bec511d4901065f8359f1adb89c51b13bcbe597c8b4ebda/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0823e3669f0610082bec511d4901065f8359f1adb89c51b13bcbe597c8b4ebda","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-580781_kube-system_b7e8085279b73468c7c9ebe8de57600f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b7e8085279b73468c7c9ebe8de57600f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b7e8085279b73468c7c9ebe8de57600f/containers/etcd/74f77836\",\"readonly\":false,\"propagation\":0,\"selinux_relabe
l\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-580781","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b7e8085279b73468c7c9ebe8de57600f","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"b7e8085279b73468c7c9ebe8de57600f","kubernetes.io/config.seen":"2025-09-29T08:48:46.256167511Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev
","id":"8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae","pid":2184,"status":"running","bundle":"/run/containers/storage/overlay-containers/8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae/userdata","rootfs":"/var/lib/containers/storage/overlay/2f48ac234c052a2049755b67748c8fdddba8a96d31ab562d81a1463e22e38128/merged","created":"2025-09-29T08:49:08.329258979Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30
\"}","io.kubernetes.cri-o.ContainerID":"8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T08:49:08.284768324Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9404f44a-2a63-4a35-abd5-64f6a3e4fb2d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_9404f44a-2a63-4a35-abd5-64f6a3e4fb2d/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2f
48ac234c052a2049755b67748c8fdddba8a96d31ab562d81a1463e22e38128/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_9404f44a-2a63-4a35-abd5-64f6a3e4fb2d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/572ac443fe212e5643b1ac6fdafe46a0da631632a6b9cfd0a2b999fbb8e164a4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"572ac443fe212e5643b1ac6fdafe46a0da631632a6b9cfd0a2b999fbb8e164a4","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_9404f44a-2a63-4a35-abd5-64f6a3e4fb2d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9404f44a-2a63-4a35-abd5-64f6a3e4fb2d/etc-hosts\",\"readonly\":false,\"propagatio
n\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9404f44a-2a63-4a35-abd5-64f6a3e4fb2d/containers/storage-provisioner/2f42e6d8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9404f44a-2a63-4a35-abd5-64f6a3e4fb2d/volumes/kubernetes.io~projected/kube-api-access-c8s7v\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9404f44a-2a63-4a35-abd5-64f6a3e4fb2d","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},
\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2025-09-29T08:49:07.946254133Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ea587e65db657aa8426b48de4a514cc06ee9682de69f373b625ecaeb016e9174","pid":1456,"status":"running","bundle":"/run/containers/storage/overlay-containers/ea587e65db657aa8426b48de4a514cc06ee9682de69f373b625ecaeb016e9174/userdata","rootfs":"/var/lib/cont
ainers/storage/overlay/9c04c7a40ac58514cf9bebc82f0db3f0c6bb66ad0562f390067dc6b988c36057/merged","created":"2025-09-29T08:48:46.826222705Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9c112505","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9c112505\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.
terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ea587e65db657aa8426b48de4a514cc06ee9682de69f373b625ecaeb016e9174","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T08:48:46.765677383Z","io.kubernetes.cri-o.Image":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri-o.ImageRef":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-580781\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"704a8f57072a015861ebe37c304a623c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-580781_704a8f57072a015861ebe37c304a623c/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-ma
nager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9c04c7a40ac58514cf9bebc82f0db3f0c6bb66ad0562f390067dc6b988c36057/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-580781_kube-system_704a8f57072a015861ebe37c304a623c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/454b7ed6d8fc6361bbcba9de8cbc481d22e6fd7df1655c5f8fe8c9ce81c79bcb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"454b7ed6d8fc6361bbcba9de8cbc481d22e6fd7df1655c5f8fe8c9ce81c79bcb","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-580781_kube-system_704a8f57072a015861ebe37c304a623c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},
{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/704a8f57072a015861ebe37c304a623c/containers/kube-controller-manager/d90a1ba8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/704a8f57072a015861ebe37c304a623c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_r
elabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-580781","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"704a8f57072a015861ebe37c304a623c","kubernetes.io/config.hash":"704a8f57072a015861ebe37c304a623c","kubernetes.io/config.seen":"2025-09-29T08:48:46.256170213Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0929 08:49:30.883509  431849 cri.go:126] list returned 8 containers
	I0929 08:49:30.883518  431849 cri.go:129] container: {ID:1096ad61528e3321131814ec88ace2fa301f202bb31dfc3364ed1aab9445b86e Status:running}
	I0929 08:49:30.883532  431849 cri.go:135] skipping {1096ad61528e3321131814ec88ace2fa301f202bb31dfc3364ed1aab9445b86e running}: state = "running", want "paused"
	I0929 08:49:30.883539  431849 cri.go:129] container: {ID:1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0 Status:running}
	I0929 08:49:30.883543  431849 cri.go:135] skipping {1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0 running}: state = "running", want "paused"
	I0929 08:49:30.883545  431849 cri.go:129] container: {ID:31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8 Status:running}
	I0929 08:49:30.883547  431849 cri.go:135] skipping {31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8 running}: state = "running", want "paused"
	I0929 08:49:30.883549  431849 cri.go:129] container: {ID:3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510 Status:running}
	I0929 08:49:30.883551  431849 cri.go:135] skipping {3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510 running}: state = "running", want "paused"
	I0929 08:49:30.883554  431849 cri.go:129] container: {ID:49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33 Status:running}
	I0929 08:49:30.883558  431849 cri.go:135] skipping {49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33 running}: state = "running", want "paused"
	I0929 08:49:30.883561  431849 cri.go:129] container: {ID:83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4 Status:running}
	I0929 08:49:30.883563  431849 cri.go:135] skipping {83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4 running}: state = "running", want "paused"
	I0929 08:49:30.883566  431849 cri.go:129] container: {ID:8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae Status:running}
	I0929 08:49:30.883568  431849 cri.go:135] skipping {8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae running}: state = "running", want "paused"
	I0929 08:49:30.883572  431849 cri.go:129] container: {ID:ea587e65db657aa8426b48de4a514cc06ee9682de69f373b625ecaeb016e9174 Status:running}
	I0929 08:49:30.883574  431849 cri.go:135] skipping {ea587e65db657aa8426b48de4a514cc06ee9682de69f373b625ecaeb016e9174 running}: state = "running", want "paused"
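The lister above parses the `sudo runc list -f json` output (cri.go:116) and then skips every container whose state does not match the requested "paused" (cri.go:129/135), which is why all eight running containers are skipped here. A minimal sketch of that parse-and-filter step, using a trimmed-down record type and abbreviated/hypothetical sample IDs:

package main

import (
	"encoding/json"
	"fmt"
)

// container mirrors only the two fields of the `runc list -f json` records
// that the filtering above actually uses; the real records carry far more
// metadata (bundle paths, annotations, timestamps, ...).
type container struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterByState keeps only containers whose status matches want, so a request
// for "paused" drops every "running" entry, as in the log above.
func filterByState(raw []byte, want string) ([]string, error) {
	var list []container
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range list {
		if c.Status == want {
			ids = append(ids, c.ID)
		}
	}
	return ids, nil
}

func main() {
	sample := []byte(`[{"id":"1096ad61528e","status":"running"},{"id":"deadbeef","status":"paused"}]`)
	ids, err := filterByState(sample, "paused")
	if err != nil {
		panic(err)
	}
	fmt.Println(ids) // [deadbeef]
}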
	I0929 08:49:30.883619  431849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 08:49:30.893893  431849 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 08:49:30.893905  431849 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 08:49:30.893947  431849 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 08:49:30.903151  431849 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 08:49:30.903644  431849 kubeconfig.go:125] found "functional-580781" server: "https://192.168.49.2:8441"
	I0929 08:49:30.904807  431849 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 08:49:30.914399  431849 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-09-29 08:48:42.318979567 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-09-29 08:49:30.255595960 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
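Drift is detected by diffing the existing /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new: `diff -u` exits 0 when the files match and 1 when they differ, and the status-1 result above (the changed enable-admission-plugins value) is what triggers the reconfigure. A small Go sketch of that exit-code interpretation, assuming diff is on PATH and the files are readable without sudo:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u old new` and interprets the exit status the way
// the restart path above does: 0 means no drift, 1 means the rendered config
// changed, anything else is a real error.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drift {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}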
	I0929 08:49:30.914407  431849 kubeadm.go:1152] stopping kube-system containers ...
	I0929 08:49:30.914420  431849 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0929 08:49:30.914467  431849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 08:49:30.952212  431849 cri.go:89] found id: "49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33"
	I0929 08:49:30.952225  431849 cri.go:89] found id: "8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae"
	I0929 08:49:30.952228  431849 cri.go:89] found id: "1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0"
	I0929 08:49:30.952230  431849 cri.go:89] found id: "3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510"
	I0929 08:49:30.952232  431849 cri.go:89] found id: "ea587e65db657aa8426b48de4a514cc06ee9682de69f373b625ecaeb016e9174"
	I0929 08:49:30.952234  431849 cri.go:89] found id: "83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4"
	I0929 08:49:30.952235  431849 cri.go:89] found id: "1096ad61528e3321131814ec88ace2fa301f202bb31dfc3364ed1aab9445b86e"
	I0929 08:49:30.952237  431849 cri.go:89] found id: "31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8"
	I0929 08:49:30.952239  431849 cri.go:89] found id: ""
	I0929 08:49:30.952243  431849 cri.go:252] Stopping containers: [49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33 8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae 1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0 3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510 ea587e65db657aa8426b48de4a514cc06ee9682de69f373b625ecaeb016e9174 83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4 1096ad61528e3321131814ec88ace2fa301f202bb31dfc3364ed1aab9445b86e 31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8]
	I0929 08:49:30.952299  431849 ssh_runner.go:195] Run: which crictl
	I0929 08:49:30.956103  431849 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33 8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae 1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0 3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510 ea587e65db657aa8426b48de4a514cc06ee9682de69f373b625ecaeb016e9174 83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4 1096ad61528e3321131814ec88ace2fa301f202bb31dfc3364ed1aab9445b86e 31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8
	I0929 08:49:53.956732  431849 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33 8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae 1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0 3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510 ea587e65db657aa8426b48de4a514cc06ee9682de69f373b625ecaeb016e9174 83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4 1096ad61528e3321131814ec88ace2fa301f202bb31dfc3364ed1aab9445b86e 31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8: (23.000598258s)
	I0929 08:49:53.956796  431849 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0929 08:49:53.999014  431849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 08:49:54.008745  431849 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Sep 29 08:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Sep 29 08:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Sep 29 08:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Sep 29 08:48 /etc/kubernetes/scheduler.conf
	
	I0929 08:49:54.008799  431849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0929 08:49:54.018252  431849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0929 08:49:54.027860  431849 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 08:49:54.027913  431849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 08:49:54.037280  431849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0929 08:49:54.046534  431849 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 08:49:54.046592  431849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 08:49:54.055663  431849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0929 08:49:54.065081  431849 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 08:49:54.065127  431849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 08:49:54.073977  431849 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 08:49:54.083443  431849 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 08:49:54.127201  431849 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 08:49:55.165647  431849 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.038422127s)
	I0929 08:49:55.165667  431849 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0929 08:49:55.347521  431849 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 08:49:55.396901  431849 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0929 08:49:55.453652  431849 api_server.go:52] waiting for apiserver process to appear ...
	I0929 08:49:55.453713  431849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 08:49:55.954763  431849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 08:49:56.453904  431849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 08:49:56.468758  431849 api_server.go:72] duration metric: took 1.015103904s to wait for apiserver process to appear ...
	I0929 08:49:56.468777  431849 api_server.go:88] waiting for apiserver healthz status ...
	I0929 08:49:56.468800  431849 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 08:49:57.327724  431849 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 08:49:57.327743  431849 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 08:49:57.327758  431849 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 08:49:57.349705  431849 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 08:49:57.349724  431849 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 08:49:57.468932  431849 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 08:49:57.474168  431849 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 08:49:57.474190  431849 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 08:49:57.969879  431849 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 08:49:57.974133  431849 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 08:49:57.974155  431849 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 08:49:58.469376  431849 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 08:49:58.474218  431849 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 08:49:58.474233  431849 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 08:49:58.968854  431849 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 08:49:58.973092  431849 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0929 08:49:58.979192  431849 api_server.go:141] control plane version: v1.34.1
	I0929 08:49:58.979213  431849 api_server.go:131] duration metric: took 2.510428568s to wait for apiserver health ...
	I0929 08:49:58.979225  431849 cni.go:84] Creating CNI manager for ""
	I0929 08:49:58.979232  431849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:49:58.981328  431849 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 08:49:58.982691  431849 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 08:49:58.987043  431849 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I0929 08:49:58.987056  431849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 08:49:59.006572  431849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 08:49:59.303417  431849 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 08:49:59.307085  431849 system_pods.go:59] 8 kube-system pods found
	I0929 08:49:59.307113  431849 system_pods.go:61] "coredns-66bc5c9577-qn4f9" [5ebe4871-c870-4dc3-b427-56b94b73b2b7] Running
	I0929 08:49:59.307122  431849 system_pods.go:61] "etcd-functional-580781" [4655d71f-e31a-49e8-8f9c-870391100bd2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 08:49:59.307126  431849 system_pods.go:61] "kindnet-pnn6t" [c1fe1e44-adab-40da-af6f-88ef5240ddcb] Running
	I0929 08:49:59.307132  431849 system_pods.go:61] "kube-apiserver-functional-580781" [e90ca276-8349-4cfe-802f-639d4729960f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 08:49:59.307136  431849 system_pods.go:61] "kube-controller-manager-functional-580781" [53e25779-10aa-4f85-9e92-6f79f32a551b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 08:49:59.307139  431849 system_pods.go:61] "kube-proxy-7zlkp" [63373f56-af01-4ce0-83c7-57300f541f3f] Running
	I0929 08:49:59.307144  431849 system_pods.go:61] "kube-scheduler-functional-580781" [3fd745a9-0cdb-4cb8-8922-68b16d4d84a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 08:49:59.307148  431849 system_pods.go:61] "storage-provisioner" [9404f44a-2a63-4a35-abd5-64f6a3e4fb2d] Running
	I0929 08:49:59.307155  431849 system_pods.go:74] duration metric: took 3.724567ms to wait for pod list to return data ...
	I0929 08:49:59.307162  431849 node_conditions.go:102] verifying NodePressure condition ...
	I0929 08:49:59.309840  431849 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 08:49:59.309862  431849 node_conditions.go:123] node cpu capacity is 8
	I0929 08:49:59.309878  431849 node_conditions.go:105] duration metric: took 2.710645ms to run NodePressure ...
	I0929 08:49:59.309898  431849 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 08:49:59.561323  431849 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0929 08:49:59.564139  431849 kubeadm.go:735] kubelet initialised
	I0929 08:49:59.564150  431849 kubeadm.go:736] duration metric: took 2.811345ms waiting for restarted kubelet to initialise ...
	I0929 08:49:59.564164  431849 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 08:49:59.572650  431849 ops.go:34] apiserver oom_adj: -16
	I0929 08:49:59.572663  431849 kubeadm.go:593] duration metric: took 28.678753047s to restartPrimaryControlPlane
	I0929 08:49:59.572671  431849 kubeadm.go:394] duration metric: took 28.748856224s to StartCluster
	I0929 08:49:59.572691  431849 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:49:59.572771  431849 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:49:59.573684  431849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:49:59.573930  431849 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 08:49:59.573990  431849 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 08:49:59.574078  431849 addons.go:69] Setting storage-provisioner=true in profile "functional-580781"
	I0929 08:49:59.574088  431849 addons.go:69] Setting default-storageclass=true in profile "functional-580781"
	I0929 08:49:59.574093  431849 addons.go:238] Setting addon storage-provisioner=true in "functional-580781"
	W0929 08:49:59.574100  431849 addons.go:247] addon storage-provisioner should already be in state true
	I0929 08:49:59.574102  431849 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-580781"
	I0929 08:49:59.574132  431849 host.go:66] Checking if "functional-580781" exists ...
	I0929 08:49:59.574155  431849 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:49:59.574445  431849 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
	I0929 08:49:59.574643  431849 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
	I0929 08:49:59.575577  431849 out.go:179] * Verifying Kubernetes components...
	I0929 08:49:59.576949  431849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 08:49:59.597587  431849 addons.go:238] Setting addon default-storageclass=true in "functional-580781"
	W0929 08:49:59.597597  431849 addons.go:247] addon default-storageclass should already be in state true
	I0929 08:49:59.597621  431849 host.go:66] Checking if "functional-580781" exists ...
	I0929 08:49:59.598060  431849 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
	I0929 08:49:59.598983  431849 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 08:49:59.600672  431849 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:49:59.600684  431849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 08:49:59.600731  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:59.618514  431849 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 08:49:59.618539  431849 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 08:49:59.618603  431849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
	I0929 08:49:59.626972  431849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
	I0929 08:49:59.641007  431849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
	I0929 08:49:59.713803  431849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 08:49:59.727685  431849 node_ready.go:35] waiting up to 6m0s for node "functional-580781" to be "Ready" ...
	I0929 08:49:59.730543  431849 node_ready.go:49] node "functional-580781" is "Ready"
	I0929 08:49:59.730561  431849 node_ready.go:38] duration metric: took 2.847181ms for node "functional-580781" to be "Ready" ...
	I0929 08:49:59.730608  431849 api_server.go:52] waiting for apiserver process to appear ...
	I0929 08:49:59.730652  431849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 08:49:59.737878  431849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 08:49:59.743666  431849 api_server.go:72] duration metric: took 169.707848ms to wait for apiserver process to appear ...
	I0929 08:49:59.743687  431849 api_server.go:88] waiting for apiserver healthz status ...
	I0929 08:49:59.743708  431849 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 08:49:59.749413  431849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 08:49:59.749851  431849 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0929 08:49:59.750896  431849 api_server.go:141] control plane version: v1.34.1
	I0929 08:49:59.750913  431849 api_server.go:131] duration metric: took 7.219456ms to wait for apiserver health ...
	I0929 08:49:59.750923  431849 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 08:49:59.754175  431849 system_pods.go:59] 8 kube-system pods found
	I0929 08:49:59.754190  431849 system_pods.go:61] "coredns-66bc5c9577-qn4f9" [5ebe4871-c870-4dc3-b427-56b94b73b2b7] Running
	I0929 08:49:59.754193  431849 system_pods.go:61] "etcd-functional-580781" [4655d71f-e31a-49e8-8f9c-870391100bd2] Running
	I0929 08:49:59.754197  431849 system_pods.go:61] "kindnet-pnn6t" [c1fe1e44-adab-40da-af6f-88ef5240ddcb] Running
	I0929 08:49:59.754208  431849 system_pods.go:61] "kube-apiserver-functional-580781" [e90ca276-8349-4cfe-802f-639d4729960f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 08:49:59.754213  431849 system_pods.go:61] "kube-controller-manager-functional-580781" [53e25779-10aa-4f85-9e92-6f79f32a551b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 08:49:59.754217  431849 system_pods.go:61] "kube-proxy-7zlkp" [63373f56-af01-4ce0-83c7-57300f541f3f] Running
	I0929 08:49:59.754221  431849 system_pods.go:61] "kube-scheduler-functional-580781" [3fd745a9-0cdb-4cb8-8922-68b16d4d84a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 08:49:59.754224  431849 system_pods.go:61] "storage-provisioner" [9404f44a-2a63-4a35-abd5-64f6a3e4fb2d] Running
	I0929 08:49:59.754230  431849 system_pods.go:74] duration metric: took 3.302134ms to wait for pod list to return data ...
	I0929 08:49:59.754237  431849 default_sa.go:34] waiting for default service account to be created ...
	I0929 08:49:59.756663  431849 default_sa.go:45] found service account: "default"
	I0929 08:49:59.756676  431849 default_sa.go:55] duration metric: took 2.432824ms for default service account to be created ...
	I0929 08:49:59.756685  431849 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 08:49:59.760021  431849 system_pods.go:86] 8 kube-system pods found
	I0929 08:49:59.760040  431849 system_pods.go:89] "coredns-66bc5c9577-qn4f9" [5ebe4871-c870-4dc3-b427-56b94b73b2b7] Running
	I0929 08:49:59.760047  431849 system_pods.go:89] "etcd-functional-580781" [4655d71f-e31a-49e8-8f9c-870391100bd2] Running
	I0929 08:49:59.760052  431849 system_pods.go:89] "kindnet-pnn6t" [c1fe1e44-adab-40da-af6f-88ef5240ddcb] Running
	I0929 08:49:59.760061  431849 system_pods.go:89] "kube-apiserver-functional-580781" [e90ca276-8349-4cfe-802f-639d4729960f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 08:49:59.760068  431849 system_pods.go:89] "kube-controller-manager-functional-580781" [53e25779-10aa-4f85-9e92-6f79f32a551b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 08:49:59.760074  431849 system_pods.go:89] "kube-proxy-7zlkp" [63373f56-af01-4ce0-83c7-57300f541f3f] Running
	I0929 08:49:59.760081  431849 system_pods.go:89] "kube-scheduler-functional-580781" [3fd745a9-0cdb-4cb8-8922-68b16d4d84a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 08:49:59.760099  431849 system_pods.go:89] "storage-provisioner" [9404f44a-2a63-4a35-abd5-64f6a3e4fb2d] Running
	I0929 08:49:59.760112  431849 system_pods.go:126] duration metric: took 3.421356ms to wait for k8s-apps to be running ...
	I0929 08:49:59.760120  431849 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 08:49:59.760170  431849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 08:50:00.217685  431849 system_svc.go:56] duration metric: took 457.556356ms WaitForService to wait for kubelet
	I0929 08:50:00.217704  431849 kubeadm.go:578] duration metric: took 643.752989ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 08:50:00.217721  431849 node_conditions.go:102] verifying NodePressure condition ...
	I0929 08:50:00.219950  431849 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 08:50:00.219966  431849 node_conditions.go:123] node cpu capacity is 8
	I0929 08:50:00.219979  431849 node_conditions.go:105] duration metric: took 2.253498ms to run NodePressure ...
	I0929 08:50:00.219993  431849 start.go:241] waiting for startup goroutines ...
	I0929 08:50:00.225692  431849 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 08:50:00.226809  431849 addons.go:514] duration metric: took 652.832348ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0929 08:50:00.226853  431849 start.go:246] waiting for cluster config update ...
	I0929 08:50:00.226864  431849 start.go:255] writing updated cluster config ...
	I0929 08:50:00.227108  431849 ssh_runner.go:195] Run: rm -f paused
	I0929 08:50:00.230941  431849 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:50:00.234133  431849 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qn4f9" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:00.238626  431849 pod_ready.go:94] pod "coredns-66bc5c9577-qn4f9" is "Ready"
	I0929 08:50:00.238640  431849 pod_ready.go:86] duration metric: took 4.493608ms for pod "coredns-66bc5c9577-qn4f9" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:00.240646  431849 pod_ready.go:83] waiting for pod "etcd-functional-580781" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:00.244209  431849 pod_ready.go:94] pod "etcd-functional-580781" is "Ready"
	I0929 08:50:00.244221  431849 pod_ready.go:86] duration metric: took 3.564692ms for pod "etcd-functional-580781" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:00.246044  431849 pod_ready.go:83] waiting for pod "kube-apiserver-functional-580781" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 08:50:02.251094  431849 pod_ready.go:104] pod "kube-apiserver-functional-580781" is not "Ready", error: <nil>
	W0929 08:50:04.252231  431849 pod_ready.go:104] pod "kube-apiserver-functional-580781" is not "Ready", error: <nil>
	W0929 08:50:06.751991  431849 pod_ready.go:104] pod "kube-apiserver-functional-580781" is not "Ready", error: <nil>
	W0929 08:50:09.251459  431849 pod_ready.go:104] pod "kube-apiserver-functional-580781" is not "Ready", error: <nil>
	W0929 08:50:11.751577  431849 pod_ready.go:104] pod "kube-apiserver-functional-580781" is not "Ready", error: <nil>
	I0929 08:50:12.251980  431849 pod_ready.go:94] pod "kube-apiserver-functional-580781" is "Ready"
	I0929 08:50:12.251996  431849 pod_ready.go:86] duration metric: took 12.005942042s for pod "kube-apiserver-functional-580781" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:12.254173  431849 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-580781" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:12.258082  431849 pod_ready.go:94] pod "kube-controller-manager-functional-580781" is "Ready"
	I0929 08:50:12.258093  431849 pod_ready.go:86] duration metric: took 3.908896ms for pod "kube-controller-manager-functional-580781" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:12.259948  431849 pod_ready.go:83] waiting for pod "kube-proxy-7zlkp" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:12.263711  431849 pod_ready.go:94] pod "kube-proxy-7zlkp" is "Ready"
	I0929 08:50:12.263723  431849 pod_ready.go:86] duration metric: took 3.760414ms for pod "kube-proxy-7zlkp" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:12.265509  431849 pod_ready.go:83] waiting for pod "kube-scheduler-functional-580781" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:12.450126  431849 pod_ready.go:94] pod "kube-scheduler-functional-580781" is "Ready"
	I0929 08:50:12.450143  431849 pod_ready.go:86] duration metric: took 184.622762ms for pod "kube-scheduler-functional-580781" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 08:50:12.450153  431849 pod_ready.go:40] duration metric: took 12.219185387s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 08:50:12.497062  431849 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 08:50:12.499777  431849 out.go:179] * Done! kubectl is now configured to use "functional-580781" cluster and "default" namespace by default
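The healthz progression in the restart log above (403 while "system:anonymous" is still blocked, 500 while the rbac/bootstrap-roles and scheduling poststarthooks are pending, then 200) can be reproduced by hand against the same endpoint. A minimal sketch, assuming curl on a host that can reach 192.168.49.2 and skipping certificate verification with -k (the 0.5s retry interval is an arbitrary choice, not minikube's):

    # poll until the apiserver reports healthy
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.49.2:8441/healthz)" = "200" ]; do
      sleep 0.5
    done
    curl -sk https://192.168.49.2:8441/healthz   # prints "ok" once every poststarthook has finished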
	
	
	==> CRI-O <==
	Sep 29 08:55:09 functional-580781 crio[4228]: time="2025-09-29 08:55:09.437948403Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=9c07119b-cfc6-4212-bf35-941a5fc89bda name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:09 functional-580781 crio[4228]: time="2025-09-29 08:55:09.438178078Z" level=info msg="Image docker.io/mysql:5.7 not found" id=9c07119b-cfc6-4212-bf35-941a5fc89bda name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:17 functional-580781 crio[4228]: time="2025-09-29 08:55:17.438355705Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7453e19b-ad13-48df-b786-13edbee35143 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:17 functional-580781 crio[4228]: time="2025-09-29 08:55:17.438631459Z" level=info msg="Image docker.io/nginx:alpine not found" id=7453e19b-ad13-48df-b786-13edbee35143 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:22 functional-580781 crio[4228]: time="2025-09-29 08:55:22.438001220Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=09426fee-445f-49b5-89aa-11fda2802bc6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:22 functional-580781 crio[4228]: time="2025-09-29 08:55:22.438268241Z" level=info msg="Image docker.io/mysql:5.7 not found" id=09426fee-445f-49b5-89aa-11fda2802bc6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:31 functional-580781 crio[4228]: time="2025-09-29 08:55:31.437744698Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=43d35710-4ac1-4e68-8a9b-98b04b8c3901 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:31 functional-580781 crio[4228]: time="2025-09-29 08:55:31.438047973Z" level=info msg="Image docker.io/nginx:alpine not found" id=43d35710-4ac1-4e68-8a9b-98b04b8c3901 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:43 functional-580781 crio[4228]: time="2025-09-29 08:55:43.438141937Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=a1a6420f-5b30-4e38-8e33-2fe2c396bbe7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:43 functional-580781 crio[4228]: time="2025-09-29 08:55:43.438419945Z" level=info msg="Image docker.io/nginx:alpine not found" id=a1a6420f-5b30-4e38-8e33-2fe2c396bbe7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:48 functional-580781 crio[4228]: time="2025-09-29 08:55:48.820072022Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=df44d232-6b69-46cf-a250-f5b5732d7506 name=/runtime.v1.ImageService/PullImage
	Sep 29 08:55:48 functional-580781 crio[4228]: time="2025-09-29 08:55:48.820770408Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=6d85994f-b8e7-4230-819d-fe7166f5b013 name=/runtime.v1.ImageService/PullImage
	Sep 29 08:55:48 functional-580781 crio[4228]: time="2025-09-29 08:55:48.837042914Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 29 08:55:56 functional-580781 crio[4228]: time="2025-09-29 08:55:56.438291178Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=017d8367-2f62-4ad2-814a-992df4f35bd1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:56 functional-580781 crio[4228]: time="2025-09-29 08:55:56.438574505Z" level=info msg="Image docker.io/nginx:alpine not found" id=017d8367-2f62-4ad2-814a-992df4f35bd1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:55:58 functional-580781 crio[4228]: time="2025-09-29 08:55:58.042631145Z" level=info msg="Running pod sandbox: default/hello-node-connect-7d85dfc575-thgc5/POD" id=4e07564a-a581-442b-8fb4-55a4eabf1f49 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 08:55:58 functional-580781 crio[4228]: time="2025-09-29 08:55:58.042708063Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 08:55:58 functional-580781 crio[4228]: time="2025-09-29 08:55:58.056759802Z" level=info msg="Got pod network &{Name:hello-node-connect-7d85dfc575-thgc5 Namespace:default ID:3cc7296d536cfaccc9d77390bca3be319ac6493526f632db51a0a4da464232ff UID:fa8e859c-e2eb-4366-bf33-3fbbc9df80d6 NetNS:/var/run/netns/be687461-bfde-47e7-ab8c-eae6dad4e7b8 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 08:55:58 functional-580781 crio[4228]: time="2025-09-29 08:55:58.056791131Z" level=info msg="Adding pod default_hello-node-connect-7d85dfc575-thgc5 to CNI network \"kindnet\" (type=ptp)"
	Sep 29 08:55:58 functional-580781 crio[4228]: time="2025-09-29 08:55:58.066738358Z" level=info msg="Got pod network &{Name:hello-node-connect-7d85dfc575-thgc5 Namespace:default ID:3cc7296d536cfaccc9d77390bca3be319ac6493526f632db51a0a4da464232ff UID:fa8e859c-e2eb-4366-bf33-3fbbc9df80d6 NetNS:/var/run/netns/be687461-bfde-47e7-ab8c-eae6dad4e7b8 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 08:55:58 functional-580781 crio[4228]: time="2025-09-29 08:55:58.066937719Z" level=info msg="Checking pod default_hello-node-connect-7d85dfc575-thgc5 for CNI network kindnet (type=ptp)"
	Sep 29 08:55:58 functional-580781 crio[4228]: time="2025-09-29 08:55:58.067678688Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 08:55:58 functional-580781 crio[4228]: time="2025-09-29 08:55:58.068452431Z" level=info msg="Ran pod sandbox 3cc7296d536cfaccc9d77390bca3be319ac6493526f632db51a0a4da464232ff with infra container: default/hello-node-connect-7d85dfc575-thgc5/POD" id=4e07564a-a581-442b-8fb4-55a4eabf1f49 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 08:56:19 functional-580781 crio[4228]: time="2025-09-29 08:56:19.498737826Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=ab34152e-e20f-4019-a2cd-ab84d53108ca name=/runtime.v1.ImageService/PullImage
	Sep 29 08:56:19 functional-580781 crio[4228]: time="2025-09-29 08:56:19.502531661Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
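The repeated "Image ... not found" ImageStatus entries followed by PullImage above show CRI-O still waiting for docker.io/nginx:alpine and docker.io/mysql:5.7 to arrive. A rough way to check the same image state by hand, assuming crictl inside the node is already pointed at the CRI-O socket:

    minikube -p functional-580781 ssh
    sudo crictl inspecti docker.io/nginx:alpine   # errors out until the image has actually been pulled
    sudo crictl pull docker.io/nginx:alpine       # issues the same PullImage RPC logged above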
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db229b500cea2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago       Exited              mount-munger              0                   a56edad455b36       busybox-mount
	3201afa40ac94       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   82f71d0ce1af3       kube-apiserver-functional-580781
	346cf15effa51       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            1                   56f4894c02564       kube-scheduler-functional-580781
	47f1c99fd1006       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   2                   454b7ed6d8fc6       kube-controller-manager-functional-580781
	06427c125c739       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      1                   0823e3669f061       etcd-functional-580781
	1a6c4fa503da3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Exited              kube-controller-manager   1                   454b7ed6d8fc6       kube-controller-manager-functional-580781
	ef2ab2b48d81a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      6 minutes ago       Running             kube-proxy                1                   630401fd11ff4       kube-proxy-7zlkp
	419813926dfe4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      6 minutes ago       Running             kindnet-cni               1                   c865c04855dee       kindnet-pnn6t
	3ba534cc9995f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   572ac443fe212       storage-provisioner
	0c420a09ed822       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   1                   6fa5626cbca36       coredns-66bc5c9577-qn4f9
	49f5f6ce9ff79       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   0                   6fa5626cbca36       coredns-66bc5c9577-qn4f9
	8fa1b4de8244f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   572ac443fe212       storage-provisioner
	1bfc7f0b08c9e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      7 minutes ago       Exited              kindnet-cni               0                   c865c04855dee       kindnet-pnn6t
	3cf0b4c8c0eff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Exited              kube-proxy                0                   630401fd11ff4       kube-proxy-7zlkp
	83f4e402f8920       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      0                   0823e3669f061       etcd-functional-580781
	31ff02ffd0a6d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      7 minutes ago       Exited              kube-scheduler            0                   56f4894c02564       kube-scheduler-functional-580781
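The container listing above matches what crictl prints inside the node, including the Exited pre-restart containers alongside their restarted replacements; assuming the same socket configuration, it can be reproduced with:

    sudo crictl ps -a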
	
	
	==> coredns [0c420a09ed82237c3eba1aa280297cf3d6eef42b2c186b93991ad924d809a5b4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32989 - 54781 "HINFO IN 1322808675416363747.3298756715011358413. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079188165s
	
	
	==> coredns [49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39260 - 3064 "HINFO IN 8182008874646901959.6041357028063081178. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.094703399s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-580781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-580781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=functional-580781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T08_48_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 08:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-580781
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 08:56:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 08:52:30 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 08:52:30 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 08:52:30 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 08:52:30 +0000   Mon, 29 Sep 2025 08:49:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-580781
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 565a9e40e71a440f889c5f66396fc290
	  System UUID:                10e5194d-9350-4f16-9277-d0c31ca42e51
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-rxhk2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  default                     hello-node-connect-7d85dfc575-thgc5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     mysql-5bb876957f-g7nlv                       600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m9s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-qn4f9                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m32s
	  kube-system                 etcd-functional-580781                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m37s
	  kube-system                 kindnet-pnn6t                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m32s
	  kube-system                 kube-apiserver-functional-580781             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-functional-580781    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-proxy-7zlkp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-scheduler-functional-580781             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m31s                  kube-proxy       
	  Normal  Starting                 6m51s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    7m42s (x8 over 7m42s)  kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m42s (x8 over 7m42s)  kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m42s (x8 over 7m42s)  kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     7m37s                  kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m37s                  kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s                  kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m37s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m33s                  node-controller  Node functional-580781 event: Registered Node functional-580781 in Controller
	  Normal  NodeReady                7m21s                  kubelet          Node functional-580781 status is now: NodeReady
	  Normal  Starting                 6m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m33s (x8 over 6m33s)  kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s (x8 over 6m33s)  kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s (x8 over 6m33s)  kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m28s                  node-controller  Node functional-580781 event: Registered Node functional-580781 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[ +16.774979] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 21 41 37 dd f5 08 06
	[  +0.000328] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[  +6.075530] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 33 34 7b 85 cf 08 06
	[  +0.055887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[Sep29 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fb 19 b5 d0 db 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[  +6.806604] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[ +13.433681] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	[  +8.966707] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 f7 73 94 db cd 08 06
	[  +0.000344] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[Sep29 08:07] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 ad d0 02 25 47 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	
	
	==> etcd [06427c125c739d8a8454d779cd4b1110ffca144587807bfc615ab7ba3aa85f21] <==
	{"level":"warn","ts":"2025-09-29T08:49:56.733481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.743555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.752197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.759252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.765404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.771892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.778356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.784387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.791034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.797707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.804065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.811416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.818910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.827589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.835603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.842079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.849060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.855818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.861671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.868051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.874174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.898754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.904987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.911930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.955188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41024","server-name":"","error":"EOF"}
	
	
	==> etcd [83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4] <==
	{"level":"warn","ts":"2025-09-29T08:48:48.105391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.112542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.118842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.131384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.137802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.144433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.191696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39464","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T08:49:36.606109Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T08:49:36.606181Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-580781","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T08:49:36.606264Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T08:49:43.608132Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T08:49:43.608344Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.608389Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T08:49:43.608451Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608422Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608436Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608484Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608489Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T08:49:43.608500Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T08:49:43.608502Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.608467Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T08:49:43.610858Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T08:49:43.611011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.611040Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T08:49:43.611047Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-580781","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 08:56:28 up  2:38,  0 users,  load average: 0.08, 0.22, 0.43
	Linux functional-580781 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0] <==
	I0929 08:48:57.458656       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 08:48:57.458926       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 08:48:57.459093       1 main.go:148] setting mtu 1500 for CNI 
	I0929 08:48:57.459112       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 08:48:57.459139       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T08:48:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 08:48:57.660610       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 08:48:57.660631       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 08:48:57.660640       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 08:48:57.754818       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 08:48:58.060813       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 08:48:58.060862       1 metrics.go:72] Registering metrics
	I0929 08:48:58.060920       1 controller.go:711] "Syncing nftables rules"
	I0929 08:49:07.661018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:07.661159       1 main.go:301] handling current node
	I0929 08:49:17.661209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:17.661245       1 main.go:301] handling current node
	I0929 08:49:27.665005       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:27.665053       1 main.go:301] handling current node
	
	
	==> kindnet [419813926dfe4f3e19e4ed90e311ff20fe542f74f8ebf0dc42045be7549c7203] <==
	I0929 08:54:27.802729       1 main.go:301] handling current node
	I0929 08:54:37.803392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:54:37.803462       1 main.go:301] handling current node
	I0929 08:54:47.804324       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:54:47.804362       1 main.go:301] handling current node
	I0929 08:54:57.803605       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:54:57.803633       1 main.go:301] handling current node
	I0929 08:55:07.810909       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:55:07.810949       1 main.go:301] handling current node
	I0929 08:55:17.803174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:55:17.803206       1 main.go:301] handling current node
	I0929 08:55:27.803329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:55:27.803382       1 main.go:301] handling current node
	I0929 08:55:37.802651       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:55:37.802696       1 main.go:301] handling current node
	I0929 08:55:47.811425       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:55:47.811462       1 main.go:301] handling current node
	I0929 08:55:57.803202       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:55:57.803239       1 main.go:301] handling current node
	I0929 08:56:07.806116       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:56:07.806163       1 main.go:301] handling current node
	I0929 08:56:17.804730       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:56:17.804769       1 main.go:301] handling current node
	I0929 08:56:27.803075       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:56:27.803107       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3201afa40ac947ad27f530616359700f2260d511660f89535877216d9ccda60f] <==
	I0929 08:49:57.420512       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0929 08:49:57.422474       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0929 08:49:57.427604       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0929 08:49:57.427642       1 aggregator.go:171] initial CRD sync complete...
	I0929 08:49:57.427650       1 autoregister_controller.go:144] Starting autoregister controller
	I0929 08:49:57.427655       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0929 08:49:57.427659       1 cache.go:39] Caches are synced for autoregister controller
	I0929 08:49:57.428967       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0929 08:49:57.428988       1 policy_source.go:240] refreshing policies
	I0929 08:49:57.451215       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 08:49:57.452554       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0929 08:49:58.320496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0929 08:49:58.525606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0929 08:49:58.526853       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 08:49:58.531444       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 08:49:59.297430       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 08:49:59.400126       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 08:49:59.467824       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 08:49:59.473940       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 08:50:01.045050       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 08:50:15.663020       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.209.181"}
	I0929 08:50:19.847113       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.150.212"}
	I0929 08:50:21.656932       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.202.166"}
	I0929 08:52:30.762468       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.76.169"}
	I0929 08:55:57.798872       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.203.171"}
	
	
	==> kube-controller-manager [1a6c4fa503da3ece68dc966f8fd6d8ebafc5d006b9831ba53bd6369943bfd8a8] <==
	I0929 08:49:46.426216       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0929 08:49:46.426241       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	I0929 08:49:46.477303       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0929 08:49:46.477328       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-serving"
	I0929 08:49:46.477387       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.477602       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0929 08:49:46.477627       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I0929 08:49:46.477678       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.478062       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0929 08:49:46.478084       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 08:49:46.478100       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.478446       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0929 08:49:46.478471       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0929 08:49:46.478527       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0929 08:49:46.478544       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I0929 08:49:46.478586       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.525502       1 controllermanager.go:781] "Started controller" controller="persistentvolume-protection-controller"
	I0929 08:49:46.525575       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0929 08:49:46.525583       1 shared_informer.go:349] "Waiting for caches to sync" controller="PV protection"
	I0929 08:49:46.576219       1 controllermanager.go:781] "Started controller" controller="ephemeral-volume-controller"
	I0929 08:49:46.576246       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0929 08:49:46.576263       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I0929 08:49:46.576298       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0929 08:49:46.576312       1 shared_informer.go:349] "Waiting for caches to sync" controller="ephemeral"
	F0929 08:49:47.625587       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [47f1c99fd1006fd2040b7a6a3a2e570a4c9366287bc4a9bb519ddf562e9c5ea9] <==
	I0929 08:50:00.709093       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 08:50:00.710283       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 08:50:00.739765       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 08:50:00.739886       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 08:50:00.740155       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 08:50:00.740164       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 08:50:00.740244       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 08:50:00.740360       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 08:50:00.741441       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 08:50:00.741457       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 08:50:00.741489       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 08:50:00.741500       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 08:50:00.741534       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 08:50:00.741600       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 08:50:00.741671       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 08:50:00.741777       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-580781"
	I0929 08:50:00.741828       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 08:50:00.742012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 08:50:00.743274       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 08:50:00.743330       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 08:50:00.743372       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 08:50:00.744533       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 08:50:00.744549       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 08:50:00.745371       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:50:00.763731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510] <==
	I0929 08:48:57.328633       1 server_linux.go:53] "Using iptables proxy"
	I0929 08:48:57.398736       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:48:57.499156       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:48:57.499205       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:48:57.499363       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:48:57.517179       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:48:57.517238       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:48:57.522369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:48:57.522730       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:48:57.522759       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:48:57.524004       1 config.go:200] "Starting service config controller"
	I0929 08:48:57.524408       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:48:57.524611       1 config.go:309] "Starting node config controller"
	I0929 08:48:57.524638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:48:57.525031       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:48:57.525043       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:48:57.525096       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:48:57.525103       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:48:57.624518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:48:57.625676       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 08:48:57.625779       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:48:57.625802       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ef2ab2b48d81ada5a6d38c217b125bc7066f486fe3d353763fa03f3e46cf1062] <==
	I0929 08:49:37.495720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:49:37.595898       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:49:37.595958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:49:37.596323       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:49:37.616663       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:49:37.616736       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:49:37.622131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:49:37.622572       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:49:37.622607       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:49:37.623810       1 config.go:200] "Starting service config controller"
	I0929 08:49:37.623827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:49:37.623926       1 config.go:309] "Starting node config controller"
	I0929 08:49:37.623973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:49:37.624025       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:49:37.624039       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:49:37.624063       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:49:37.624068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:49:37.724863       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:49:37.724889       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:49:37.724902       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 08:49:37.724927       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E0929 08:49:57.362242       1 reflector.go:205] "Failed to watch" err="nodes \"functional-580781\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:49:57.362283       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E0929 08:49:57.362242       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:49:57.362240       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-scheduler [31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8] <==
	E0929 08:48:48.629336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:48:48.629343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:48:48.629190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:48:48.629108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:48:48.629583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:48:48.629604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 08:48:49.481623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 08:48:49.529255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 08:48:49.592487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:48:49.604559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:48:49.694389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:48:49.697302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 08:48:49.731820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:48:49.745001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 08:48:49.759498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:48:49.789574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 08:48:49.801523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:48:49.827015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0929 08:48:50.224669       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:53.820543       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 08:49:53.820584       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:53.820641       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 08:49:53.820662       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 08:49:53.820691       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 08:49:53.820718       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [346cf15effa5119adbb50a15e72686cb099db1666fa69bfc2a68c8fe414f1503] <==
	I0929 08:49:56.473573       1 serving.go:386] Generated self-signed cert in-memory
	W0929 08:49:57.340173       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 08:49:57.340209       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 08:49:57.340222       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 08:49:57.340232       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 08:49:57.364559       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I0929 08:49:57.364580       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:49:57.366868       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:57.366910       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:57.367205       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 08:49:57.367245       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 08:49:57.467721       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 08:55:48 functional-580781 kubelet[5417]: E0929 08:55:48.819964    5417 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(fef4d926-fc98-4617-80db-05dd451129c3): ErrImagePull: loading manifest for target platform: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:55:48 functional-580781 kubelet[5417]: E0929 08:55:48.820055    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fef4d926-fc98-4617-80db-05dd451129c3"
	Sep 29 08:55:48 functional-580781 kubelet[5417]: E0929 08:55:48.820391    5417 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 29 08:55:48 functional-580781 kubelet[5417]: E0929 08:55:48.820438    5417 kuberuntime_image.go:43] "Failed to pull image" err="short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 29 08:55:48 functional-580781 kubelet[5417]: E0929 08:55:48.820623    5417 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-rxhk2_default(c14b0343-8ceb-4ede-99c3-a1a1c337e9ab): ErrImagePull: short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" logger="UnhandledError"
	Sep 29 08:55:48 functional-580781 kubelet[5417]: E0929 08:55:48.821906    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-rxhk2" podUID="c14b0343-8ceb-4ede-99c3-a1a1c337e9ab"
	Sep 29 08:55:55 functional-580781 kubelet[5417]: E0929 08:55:55.493172    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136155492916749  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:55:55 functional-580781 kubelet[5417]: E0929 08:55:55.493215    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136155492916749  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:55:57 functional-580781 kubelet[5417]: I0929 08:55:57.818717    5417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cvn7\" (UniqueName: \"kubernetes.io/projected/fa8e859c-e2eb-4366-bf33-3fbbc9df80d6-kube-api-access-5cvn7\") pod \"hello-node-connect-7d85dfc575-thgc5\" (UID: \"fa8e859c-e2eb-4366-bf33-3fbbc9df80d6\") " pod="default/hello-node-connect-7d85dfc575-thgc5"
	Sep 29 08:56:02 functional-580781 kubelet[5417]: E0929 08:56:02.438106    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fef4d926-fc98-4617-80db-05dd451129c3"
	Sep 29 08:56:03 functional-580781 kubelet[5417]: E0929 08:56:03.439528    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-rxhk2" podUID="c14b0343-8ceb-4ede-99c3-a1a1c337e9ab"
	Sep 29 08:56:05 functional-580781 kubelet[5417]: E0929 08:56:05.494614    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136165494383207  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:56:05 functional-580781 kubelet[5417]: E0929 08:56:05.494646    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136165494383207  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:56:13 functional-580781 kubelet[5417]: E0929 08:56:13.438042    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fef4d926-fc98-4617-80db-05dd451129c3"
	Sep 29 08:56:15 functional-580781 kubelet[5417]: E0929 08:56:15.439222    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-rxhk2" podUID="c14b0343-8ceb-4ede-99c3-a1a1c337e9ab"
	Sep 29 08:56:15 functional-580781 kubelet[5417]: E0929 08:56:15.496219    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136175495952513  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:56:15 functional-580781 kubelet[5417]: E0929 08:56:15.496254    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136175495952513  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:56:19 functional-580781 kubelet[5417]: E0929 08:56:19.498231    5417 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 08:56:19 functional-580781 kubelet[5417]: E0929 08:56:19.498293    5417 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 08:56:19 functional-580781 kubelet[5417]: E0929 08:56:19.498513    5417 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-g7nlv_default(3607a95a-4566-4989-b37c-ed726517bf99): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:56:19 functional-580781 kubelet[5417]: E0929 08:56:19.498583    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 08:56:25 functional-580781 kubelet[5417]: E0929 08:56:25.437817    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fef4d926-fc98-4617-80db-05dd451129c3"
	Sep 29 08:56:25 functional-580781 kubelet[5417]: E0929 08:56:25.497638    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136185497399368  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:56:25 functional-580781 kubelet[5417]: E0929 08:56:25.497671    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136185497399368  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:56:26 functional-580781 kubelet[5417]: E0929 08:56:26.438196    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-rxhk2" podUID="c14b0343-8ceb-4ede-99c3-a1a1c337e9ab"
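
	The kubelet log above captures two distinct image-pull failures: docker.io/nginx and docker.io/mysql:5.7 hit Docker Hub's unauthenticated pull rate limit (toomanyrequests), while kicbase/echo-server:latest fails short-name resolution because no unqualified-search registries are defined in /etc/containers/registries.conf. As a rough illustration of the short-name rule (a simplified heuristic, not CRI-O's actual resolver), a reference whose first path component does not name a registry is treated as a short name:

	// shortname_sketch.go - simplified heuristic, not CRI-O's resolver.
	package main

	import (
		"fmt"
		"strings"
	)

	// looksQualified reports whether the first path component of an image
	// reference names a registry (contains '.' or ':', or is "localhost").
	func looksQualified(ref string) bool {
		first := strings.SplitN(ref, "/", 2)[0]
		return first == "localhost" || strings.ContainsAny(first, ".:")
	}

	func main() {
		for _, ref := range []string{
			"kicbase/echo-server:latest",  // short name: needs an alias or a search registry
			"docker.io/library/nginx",     // fully qualified
			"gcr.io/k8s-minikube/busybox", // fully qualified
		} {
			fmt.Printf("%-30s qualified=%v\n", ref, looksQualified(ref))
		}
	}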
	
	
	==> storage-provisioner [3ba534cc9995fbd82b83b955735dab9de1c54de1d8fd7119eccb782d77fe63fd] <==
	W0929 08:56:04.444008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:06.447547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:06.451488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:08.454468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:08.458476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:10.461542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:10.465495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:12.468339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:12.472451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:14.475553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:14.479453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:16.482819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:16.486953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:18.490642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:18.494592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:20.497262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:20.501371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:22.504578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:22.509745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:24.512911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:24.516905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:26.519860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:26.523555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:28.526819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:56:28.530640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae] <==
	W0929 08:49:10.392985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:12.396758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:12.400952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:14.404972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:14.410240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:16.414556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:16.418891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:18.422630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:18.426730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:20.430506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:20.434896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:22.437786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:22.441661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:24.444570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:24.448383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:26.451410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:26.456736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:28.460215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:28.464644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:30.467426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:30.475151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:32.478302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:32.482203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:34.485888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:34.489874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-580781 -n functional-580781
helpers_test.go:269: (dbg) Run:  kubectl --context functional-580781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:30 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://db229b500cea2a9d934455d2b9a59a2e28deb77a8bbc7c217b4b73c4b22b9246
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 08:52:24 +0000
	      Finished:     Mon, 29 Sep 2025 08:52:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgs2x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-qgs2x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m59s  default-scheduler  Successfully assigned default/busybox-mount to functional-580781
	  Normal  Pulling    5m59s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m5s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.114s (1m53.786s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m5s   kubelet            Created container: mount-munger
	  Normal  Started    4m5s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-rxhk2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:52:30 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8j626 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8j626:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m59s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-rxhk2 to functional-580781
	  Normal   Pulling    93s (x3 over 3m58s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     41s (x3 over 2m58s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     41s (x3 over 2m58s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x5 over 2m58s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3s (x5 over 2m58s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-thgc5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:55:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5cvn7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5cvn7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  32s   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-thgc5 to functional-580781
	  Normal  Pulling    31s   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-g7nlv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:19 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pnqlc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pnqlc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m10s                default-scheduler  Successfully assigned default/mysql-5bb876957f-g7nlv to functional-580781
	  Warning  Failed     5m8s                 kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    80s (x5 over 5m7s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     80s (x5 over 5m7s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    67s (x4 over 6m9s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     10s (x4 over 5m8s)   kubelet            Error: ErrImagePull
	  Warning  Failed     10s (x3 over 3m35s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:21 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfpvw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tfpvw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m8s                 default-scheduler  Successfully assigned default/nginx-svc to functional-580781
	  Warning  Failed     2m58s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     86s (x2 over 4m37s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     86s (x3 over 4m37s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    46s (x5 over 4m37s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     46s (x5 over 4m37s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    33s (x4 over 6m8s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kpg5f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kpg5f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-580781
	  Warning  Failed     2m27s (x2 over 4m6s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m2s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     41s (x3 over 4m6s)    kubelet            Error: ErrImagePull
	  Warning  Failed     41s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x5 over 4m6s)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4s (x5 over 4m6s)     kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.13s)
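The hello-node failures in the post-mortem above come from the short image name "kicbase/echo-server": with no unqualified-search registries defined in /etc/containers/registries.conf, CRI-O's image stack refuses to guess a registry, so the pull fails before it ever reaches Docker Hub. A minimal sketch of a registries.conf that would let such a short name resolve, assuming docker.io is the intended source (illustrative only; this is not part of the test run):

	# /etc/containers/registries.conf (sketch; assumes docker.io is the intended registry)
	unqualified-search-registries = ["docker.io"]

	# or, more narrowly, a short-name alias, e.g. in /etc/containers/registries.conf.d/000-shortnames.conf
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"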

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-580781 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-g7nlv" [3607a95a-4566-4989-b37c-ed726517bf99] Pending
helpers_test.go:352: "mysql-5bb876957f-g7nlv" [3607a95a-4566-4989-b37c-ed726517bf99] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-580781 -n functional-580781
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-29 09:00:20.197543584 +0000 UTC m=+1867.844169041
functional_test.go:1804: (dbg) Run:  kubectl --context functional-580781 describe po mysql-5bb876957f-g7nlv -n default
functional_test.go:1804: (dbg) kubectl --context functional-580781 describe po mysql-5bb876957f-g7nlv -n default:
Name:             mysql-5bb876957f-g7nlv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-580781/192.168.49.2
Start Time:       Mon, 29 Sep 2025 08:50:19 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pnqlc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-pnqlc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-g7nlv to functional-580781
  Warning  Failed     8m59s                kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m38s (x5 over 10m)  kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     55s (x5 over 8m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     55s (x4 over 7m26s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    8s (x15 over 8m58s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     8s (x15 over 8m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1804: (dbg) Run:  kubectl --context functional-580781 logs mysql-5bb876957f-g7nlv -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-580781 logs mysql-5bb876957f-g7nlv -n default: exit status 1 (68.448619ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-g7nlv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-580781 logs mysql-5bb876957f-g7nlv -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
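The mysql pull fails for the same reason as the nginx and sp-pod pulls above: Docker Hub's anonymous pull rate limit (toomanyrequests). The remaining allowance can be inspected without consuming a pull by reading the rate-limit headers returned for a manifest HEAD request, as documented by Docker. A minimal Go sketch of that check (the endpoint, the ratelimitpreview/test repository and the header names follow Docker's rate-limit documentation; this program is not part of the test suite):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"net/http"
	)

	// tokenResponse holds the anonymous bearer token issued by Docker Hub's auth service.
	type tokenResponse struct {
		Token string `json:"token"`
	}

	func main() {
		// Request an anonymous pull token scoped to the rate-limit preview repository.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		var tok tokenResponse
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			log.Fatal(err)
		}

		// A HEAD request for a manifest returns the current rate-limit headers
		// and, per Docker's documentation, does not count against the limit.
		req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		if err != nil {
			log.Fatal(err)
		}
		req.Header.Set("Authorization", "Bearer "+tok.Token)

		res, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer res.Body.Close()

		fmt.Println("ratelimit-limit:        ", res.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:    ", res.Header.Get("ratelimit-remaining"))
		fmt.Println("docker-ratelimit-source:", res.Header.Get("docker-ratelimit-source"))
	}

Authenticating the pulls (docker login / podman login) or routing them through a registry mirror raises or removes this limit, which suggests the toomanyrequests failures above are environmental rather than a regression in the code under test.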
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-580781
helpers_test.go:243: (dbg) docker inspect functional-580781:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0",
	        "Created": "2025-09-29T08:48:33.034529223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 426177,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T08:48:33.070958392Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/hosts",
	        "LogPath": "/var/lib/docker/containers/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0/38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0-json.log",
	        "Name": "/functional-580781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-580781:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-580781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "38862aa7a2bf892557846c566b9ea5372995c0f11703bb27a3c8ddeac626b1d0",
	                "LowerDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7f573b69e680972525e9a1c1e542f43bb129b25391ef6e32aa7685ea4274d361/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-580781",
	                "Source": "/var/lib/docker/volumes/functional-580781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-580781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-580781",
	                "name.minikube.sigs.k8s.io": "functional-580781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b37cbd8f035d18de42849ede2340b295b85fe84979fff6ab1cec7b19304cded",
	            "SandboxKey": "/var/run/docker/netns/5b37cbd8f035",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-580781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:69:98:c1:90:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "495c1eb850caf76b3c694e019686a6cae7865db2cadf61ef3a9e798cb0bdad99",
	                    "EndpointID": "8c180be2c2eda60e41070ee44e33e49d42b76851992a3e20cd0612627b94aff0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-580781",
	                        "38862aa7a2bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-580781 -n functional-580781
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 logs -n 25: (1.415763903s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdany-port2091007709/001:/mount-9p --alsologtostderr -v=1                   │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │                     │
	│ ssh       │ functional-580781 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │                     │
	│ ssh       │ functional-580781 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │ 29 Sep 25 08:50 UTC │
	│ ssh       │ functional-580781 ssh -- ls -la /mount-9p                                                                                         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │ 29 Sep 25 08:50 UTC │
	│ ssh       │ functional-580781 ssh cat /mount-9p/test-1759135828609145608                                                                      │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:50 UTC │ 29 Sep 25 08:50 UTC │
	│ ssh       │ functional-580781 ssh stat /mount-9p/created-by-test                                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh       │ functional-580781 ssh stat /mount-9p/created-by-pod                                                                               │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh       │ functional-580781 ssh sudo umount -f /mount-9p                                                                                    │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ mount     │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdspecific-port2497644652/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh       │ functional-580781 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh       │ functional-580781 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh       │ functional-580781 ssh -- ls -la /mount-9p                                                                                         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh       │ functional-580781 ssh sudo umount -f /mount-9p                                                                                    │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ mount     │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount2 --alsologtostderr -v=1                 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh       │ functional-580781 ssh findmnt -T /mount1                                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ mount     │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount3 --alsologtostderr -v=1                 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ mount     │ -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount1 --alsologtostderr -v=1                 │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ ssh       │ functional-580781 ssh findmnt -T /mount1                                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh       │ functional-580781 ssh findmnt -T /mount2                                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ ssh       │ functional-580781 ssh findmnt -T /mount3                                                                                          │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │ 29 Sep 25 08:52 UTC │
	│ mount     │ -p functional-580781 --kill=true                                                                                                  │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:52 UTC │                     │
	│ start     │ -p functional-580781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ start     │ -p functional-580781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ start     │ -p functional-580781 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-580781 --alsologtostderr -v=1                                                                    │ functional-580781 │ jenkins │ v1.37.0 │ 29 Sep 25 08:56 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:56:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:56:30.117689  444267 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:56:30.117944  444267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:56:30.117953  444267 out.go:374] Setting ErrFile to fd 2...
	I0929 08:56:30.117957  444267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:56:30.118174  444267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:56:30.118633  444267 out.go:368] Setting JSON to false
	I0929 08:56:30.119597  444267 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9539,"bootTime":1759126651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:56:30.119701  444267 start.go:140] virtualization: kvm guest
	I0929 08:56:30.121870  444267 out.go:179] * [functional-580781] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:56:30.123224  444267 notify.go:220] Checking for updates...
	I0929 08:56:30.123247  444267 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:56:30.124827  444267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:56:30.126381  444267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:56:30.127858  444267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:56:30.129178  444267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:56:30.130586  444267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:56:30.132381  444267 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:56:30.132938  444267 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:56:30.157005  444267 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:56:30.157169  444267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:56:30.211025  444267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 08:56:30.201186795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:56:30.211131  444267 docker.go:318] overlay module found
	I0929 08:56:30.212976  444267 out.go:179] * Using the docker driver based on existing profile
	I0929 08:56:30.214014  444267 start.go:304] selected driver: docker
	I0929 08:56:30.214028  444267 start.go:924] validating driver "docker" against &{Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:56:30.214115  444267 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:56:30.214224  444267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:56:30.267814  444267 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 08:56:30.258243285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:56:30.268554  444267 cni.go:84] Creating CNI manager for ""
	I0929 08:56:30.268633  444267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:56:30.268696  444267 start.go:348] cluster config:
	{Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:56:30.270539  444267 out.go:179] * dry-run validation complete!
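The cluster config dumped above comes from a dry-run validation of the functional-580781 profile. A minimal sketch of re-running that validation locally (flags inferred from the config in the log; the --dry-run flag is assumed to be available in this minikube build):

	$ out/minikube-linux-amd64 start -p functional-580781 --driver=docker --container-runtime=crio --dry-run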
	
	
	==> CRI-O <==
	Sep 29 08:58:10 functional-580781 crio[4228]: time="2025-09-29 08:58:10.438182921Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=455f1674-cdcd-46fb-8b71-8b67112199f2 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:58:13 functional-580781 crio[4228]: time="2025-09-29 08:58:13.437665017Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=af303570-16f8-41a5-91ee-3a2fe0b6eef3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:58:13 functional-580781 crio[4228]: time="2025-09-29 08:58:13.437976751Z" level=info msg="Image docker.io/nginx:alpine not found" id=af303570-16f8-41a5-91ee-3a2fe0b6eef3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:58:24 functional-580781 crio[4228]: time="2025-09-29 08:58:24.340822226Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=26399751-a142-4eb3-8918-56e2db092904 name=/runtime.v1.ImageService/PullImage
	Sep 29 08:58:24 functional-580781 crio[4228]: time="2025-09-29 08:58:24.341525253Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=47adccd1-721d-430d-b10f-5a903ff5293a name=/runtime.v1.ImageService/PullImage
	Sep 29 08:58:24 functional-580781 crio[4228]: time="2025-09-29 08:58:24.342154581Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1cf6d9e9-50ae-479b-92c1-4dcd1ce3444b name=/runtime.v1.ImageService/PullImage
	Sep 29 08:58:24 functional-580781 crio[4228]: time="2025-09-29 08:58:24.356971593Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 08:58:55 functional-580781 crio[4228]: time="2025-09-29 08:58:55.015682016Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=26c4a7c1-cd36-495b-9c00-5a3c25c9858a name=/runtime.v1.ImageService/PullImage
	Sep 29 08:58:55 functional-580781 crio[4228]: time="2025-09-29 08:58:55.017090563Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 29 08:59:07 functional-580781 crio[4228]: time="2025-09-29 08:59:07.438442861Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=0375706c-fe16-4868-b511-78c77480745f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:59:07 functional-580781 crio[4228]: time="2025-09-29 08:59:07.438795671Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=0375706c-fe16-4868-b511-78c77480745f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:59:22 functional-580781 crio[4228]: time="2025-09-29 08:59:22.438159173Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a3b45749-61e6-4a7b-8f74-a867dd37486f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:59:22 functional-580781 crio[4228]: time="2025-09-29 08:59:22.438436327Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a3b45749-61e6-4a7b-8f74-a867dd37486f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:59:25 functional-580781 crio[4228]: time="2025-09-29 08:59:25.672041248Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=1113928b-5c64-440b-acc7-5831cc3c7e08 name=/runtime.v1.ImageService/PullImage
	Sep 29 08:59:25 functional-580781 crio[4228]: time="2025-09-29 08:59:25.677411617Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 29 08:59:37 functional-580781 crio[4228]: time="2025-09-29 08:59:37.439822109Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=2ff05b2d-3eb7-4b7b-ba04-5209f212bd40 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:59:37 functional-580781 crio[4228]: time="2025-09-29 08:59:37.440035453Z" level=info msg="Image docker.io/mysql:5.7 not found" id=2ff05b2d-3eb7-4b7b-ba04-5209f212bd40 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:59:48 functional-580781 crio[4228]: time="2025-09-29 08:59:48.437709679Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=a90b5d10-eda6-4f68-8d61-86a3d4fdbcfc name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:59:48 functional-580781 crio[4228]: time="2025-09-29 08:59:48.437975365Z" level=info msg="Image docker.io/mysql:5.7 not found" id=a90b5d10-eda6-4f68-8d61-86a3d4fdbcfc name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:59:59 functional-580781 crio[4228]: time="2025-09-29 08:59:59.437628532Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=5a670127-a7fd-4d79-8788-1574cc1fa008 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 08:59:59 functional-580781 crio[4228]: time="2025-09-29 08:59:59.437881551Z" level=info msg="Image docker.io/mysql:5.7 not found" id=5a670127-a7fd-4d79-8788-1574cc1fa008 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:00:10 functional-580781 crio[4228]: time="2025-09-29 09:00:10.737570664Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=9af9ade8-176a-470b-ae15-faab4685f723 name=/runtime.v1.ImageService/PullImage
	Sep 29 09:00:10 functional-580781 crio[4228]: time="2025-09-29 09:00:10.738960482Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 29 09:00:12 functional-580781 crio[4228]: time="2025-09-29 09:00:12.437958985Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=36a29baf-5014-4bf4-8009-a6dd340398ce name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:00:12 functional-580781 crio[4228]: time="2025-09-29 09:00:12.438166006Z" level=info msg="Image docker.io/mysql:5.7 not found" id=36a29baf-5014-4bf4-8009-a6dd340398ce name=/runtime.v1.ImageService/ImageStatus
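The repeated "not found" / "Trying to access" entries above indicate that the docker.io pulls (nginx:alpine, mysql:5.7, kubernetesui/dashboard, kubernetesui/metrics-scraper) never completed. A hedged way to inspect pull state directly on the node (profile name taken from the log, crictl assumed to be present on the node):

	$ minikube -p functional-580781 ssh -- sudo crictl images
	$ minikube -p functional-580781 ssh -- sudo crictl pull docker.io/library/nginx:alpine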
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db229b500cea2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 minutes ago       Exited              mount-munger              0                   a56edad455b36       busybox-mount
	3201afa40ac94       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   82f71d0ce1af3       kube-apiserver-functional-580781
	346cf15effa51       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Running             kube-scheduler            1                   56f4894c02564       kube-scheduler-functional-580781
	47f1c99fd1006       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   2                   454b7ed6d8fc6       kube-controller-manager-functional-580781
	06427c125c739       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      1                   0823e3669f061       etcd-functional-580781
	1a6c4fa503da3       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Exited              kube-controller-manager   1                   454b7ed6d8fc6       kube-controller-manager-functional-580781
	ef2ab2b48d81a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      10 minutes ago      Running             kube-proxy                1                   630401fd11ff4       kube-proxy-7zlkp
	419813926dfe4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      10 minutes ago      Running             kindnet-cni               1                   c865c04855dee       kindnet-pnn6t
	3ba534cc9995f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       1                   572ac443fe212       storage-provisioner
	0c420a09ed822       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   1                   6fa5626cbca36       coredns-66bc5c9577-qn4f9
	49f5f6ce9ff79       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   0                   6fa5626cbca36       coredns-66bc5c9577-qn4f9
	8fa1b4de8244f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       0                   572ac443fe212       storage-provisioner
	1bfc7f0b08c9e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Exited              kindnet-cni               0                   c865c04855dee       kindnet-pnn6t
	3cf0b4c8c0eff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Exited              kube-proxy                0                   630401fd11ff4       kube-proxy-7zlkp
	83f4e402f8920       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      0                   0823e3669f061       etcd-functional-580781
	31ff02ffd0a6d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Exited              kube-scheduler            0                   56f4894c02564       kube-scheduler-functional-580781
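The container listing above can usually be regenerated on the node with crictl (sketch, same assumptions as above):

	$ minikube -p functional-580781 ssh -- sudo crictl ps -a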
	
	
	==> coredns [0c420a09ed82237c3eba1aa280297cf3d6eef42b2c186b93991ad924d809a5b4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32989 - 54781 "HINFO IN 1322808675416363747.3298756715011358413. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079188165s
	
	
	==> coredns [49f5f6ce9ff790b03e61fd7896a8afab6e4397fde2de30ad9beb70e408aaab33] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39260 - 3064 "HINFO IN 8182008874646901959.6041357028063081178. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.094703399s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
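Both CoreDNS logs show a clean config reload and only the startup HINFO probe; the first instance simply shut down on SIGTERM when the pod was restarted. To fetch the same logs directly (sketch, assuming the standard k8s-app=kube-dns label on the CoreDNS pods):

	$ kubectl --context functional-580781 -n kube-system logs -l k8s-app=kube-dns --tail=20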
	
	
	==> describe nodes <==
	Name:               functional-580781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-580781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=functional-580781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T08_48_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 08:48:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-580781
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:00:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 08:59:38 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 08:59:38 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 08:59:38 +0000   Mon, 29 Sep 2025 08:48:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 08:59:38 +0000   Mon, 29 Sep 2025 08:49:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-580781
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 565a9e40e71a440f889c5f66396fc290
	  System UUID:                10e5194d-9350-4f16-9277-d0c31ca42e51
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-rxhk2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  default                     hello-node-connect-7d85dfc575-thgc5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  default                     mysql-5bb876957f-g7nlv                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-qn4f9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-580781                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-pnn6t                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-580781              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-580781     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-7zlkp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-580781              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-m95gr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vt9lx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-580781 event: Registered Node functional-580781 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-580781 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-580781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-580781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-580781 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-580781 event: Registered Node functional-580781 in Controller
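The node description above (labels, conditions, capacity, allocated resources, events) is plain kubectl output; a sketch of regenerating it against this cluster:

	$ kubectl --context functional-580781 describe node functional-580781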
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[ +16.774979] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 21 41 37 dd f5 08 06
	[  +0.000328] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 c1 1e f2 c6 d7 08 06
	[  +6.075530] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 33 34 7b 85 cf 08 06
	[  +0.055887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[Sep29 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 fb 19 b5 d0 db 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 42 d7 b9 86 85 be 08 06
	[  +6.806604] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[ +13.433681] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	[  +8.966707] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 f7 73 94 db cd 08 06
	[  +0.000344] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 60 bc 70 fa 16 08 06
	[Sep29 08:07] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 ad d0 02 25 47 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 0a d3 31 32 5c 08 06
	
	
	==> etcd [06427c125c739d8a8454d779cd4b1110ffca144587807bfc615ab7ba3aa85f21] <==
	{"level":"warn","ts":"2025-09-29T08:49:56.759252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.765404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.771892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.778356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.784387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.791034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.797707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.804065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.811416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.818910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.827589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.835603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.842079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.849060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.855818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.861671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.868051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.874174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.898754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.904987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.911930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:49:56.955188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41024","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T08:59:56.447779Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":998}
	{"level":"info","ts":"2025-09-29T08:59:56.456064Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":998,"took":"7.926939ms","hash":2515072890,"current-db-size-bytes":3457024,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":3457024,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2025-09-29T08:59:56.456115Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2515072890,"revision":998,"compact-revision":-1}
	
	
	==> etcd [83f4e402f8920eb6638d5298a5037cd5de57c6be5e15c02939e70e50cfeecab4] <==
	{"level":"warn","ts":"2025-09-29T08:48:48.105391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.112542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.118842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.131384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.137802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.144433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T08:48:48.191696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39464","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T08:49:36.606109Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T08:49:36.606181Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-580781","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T08:49:36.606264Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T08:49:43.608132Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T08:49:43.608344Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.608389Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T08:49:43.608451Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608422Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608436Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608484Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T08:49:43.608489Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T08:49:43.608500Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T08:49:43.608502Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.608467Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T08:49:43.610858Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T08:49:43.611011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T08:49:43.611040Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T08:49:43.611047Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-580781","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:00:21 up  2:42,  0 users,  load average: 0.36, 0.24, 0.38
	Linux functional-580781 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1bfc7f0b08c9ebcb2de9450041b131d889b2c233a415db8de378bc8114a859d0] <==
	I0929 08:48:57.458656       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 08:48:57.458926       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 08:48:57.459093       1 main.go:148] setting mtu 1500 for CNI 
	I0929 08:48:57.459112       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 08:48:57.459139       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T08:48:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 08:48:57.660610       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 08:48:57.660631       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 08:48:57.660640       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 08:48:57.754818       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 08:48:58.060813       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 08:48:58.060862       1 metrics.go:72] Registering metrics
	I0929 08:48:58.060920       1 controller.go:711] "Syncing nftables rules"
	I0929 08:49:07.661018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:07.661159       1 main.go:301] handling current node
	I0929 08:49:17.661209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:17.661245       1 main.go:301] handling current node
	I0929 08:49:27.665005       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:49:27.665053       1 main.go:301] handling current node
	
	
	==> kindnet [419813926dfe4f3e19e4ed90e311ff20fe542f74f8ebf0dc42045be7549c7203] <==
	I0929 08:58:17.809003       1 main.go:301] handling current node
	I0929 08:58:27.804957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:58:27.805000       1 main.go:301] handling current node
	I0929 08:58:37.802940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:58:37.802974       1 main.go:301] handling current node
	I0929 08:58:47.803743       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:58:47.803783       1 main.go:301] handling current node
	I0929 08:58:57.808993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:58:57.809023       1 main.go:301] handling current node
	I0929 08:59:07.803089       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:59:07.803135       1 main.go:301] handling current node
	I0929 08:59:17.811418       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:59:17.811451       1 main.go:301] handling current node
	I0929 08:59:27.807141       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:59:27.807180       1 main.go:301] handling current node
	I0929 08:59:37.803327       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:59:37.803368       1 main.go:301] handling current node
	I0929 08:59:47.802970       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:59:47.803008       1 main.go:301] handling current node
	I0929 08:59:57.802737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 08:59:57.802785       1 main.go:301] handling current node
	I0929 09:00:07.802977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:00:07.803007       1 main.go:301] handling current node
	I0929 09:00:17.810906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 09:00:17.810936       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3201afa40ac947ad27f530616359700f2260d511660f89535877216d9ccda60f] <==
	I0929 08:49:57.427650       1 autoregister_controller.go:144] Starting autoregister controller
	I0929 08:49:57.427655       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0929 08:49:57.427659       1 cache.go:39] Caches are synced for autoregister controller
	I0929 08:49:57.428967       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0929 08:49:57.428988       1 policy_source.go:240] refreshing policies
	I0929 08:49:57.451215       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 08:49:57.452554       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0929 08:49:58.320496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0929 08:49:58.525606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0929 08:49:58.526853       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 08:49:58.531444       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 08:49:59.297430       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 08:49:59.400126       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 08:49:59.467824       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 08:49:59.473940       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 08:50:01.045050       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 08:50:15.663020       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.209.181"}
	I0929 08:50:19.847113       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.150.212"}
	I0929 08:50:21.656932       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.202.166"}
	I0929 08:52:30.762468       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.76.169"}
	I0929 08:55:57.798872       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.203.171"}
	I0929 08:56:32.068798       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 08:56:32.186099       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.247.251"}
	I0929 08:56:32.201368       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.131.220"}
	I0929 08:59:57.353812       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1a6c4fa503da3ece68dc966f8fd6d8ebafc5d006b9831ba53bd6369943bfd8a8] <==
	I0929 08:49:46.426216       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0929 08:49:46.426241       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	I0929 08:49:46.477303       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0929 08:49:46.477328       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-serving"
	I0929 08:49:46.477387       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.477602       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0929 08:49:46.477627       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I0929 08:49:46.477678       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.478062       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0929 08:49:46.478084       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 08:49:46.478100       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.478446       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0929 08:49:46.478471       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0929 08:49:46.478527       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0929 08:49:46.478544       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I0929 08:49:46.478586       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0929 08:49:46.525502       1 controllermanager.go:781] "Started controller" controller="persistentvolume-protection-controller"
	I0929 08:49:46.525575       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0929 08:49:46.525583       1 shared_informer.go:349] "Waiting for caches to sync" controller="PV protection"
	I0929 08:49:46.576219       1 controllermanager.go:781] "Started controller" controller="ephemeral-volume-controller"
	I0929 08:49:46.576246       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0929 08:49:46.576263       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I0929 08:49:46.576298       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0929 08:49:46.576312       1 shared_informer.go:349] "Waiting for caches to sync" controller="ephemeral"
	F0929 08:49:47.625587       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [47f1c99fd1006fd2040b7a6a3a2e570a4c9366287bc4a9bb519ddf562e9c5ea9] <==
	I0929 08:50:00.740244       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 08:50:00.740360       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 08:50:00.741441       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 08:50:00.741457       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 08:50:00.741489       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 08:50:00.741500       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 08:50:00.741534       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 08:50:00.741600       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 08:50:00.741671       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 08:50:00.741777       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-580781"
	I0929 08:50:00.741828       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 08:50:00.742012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 08:50:00.743274       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 08:50:00.743330       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 08:50:00.743372       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 08:50:00.744533       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 08:50:00.744549       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 08:50:00.745371       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 08:50:00.763731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 08:56:32.141757       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.141939       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.145888       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.146332       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.150732       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 08:56:32.151510       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
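The "serviceaccount \"kubernetes-dashboard\" not found" errors above look like a transient race: the ReplicaSet controller tried to sync the dashboard deployments before the addon had created the ServiceAccount, and the dashboard pods are listed on the node a few minutes later, so the sync eventually went through. A quick check (sketch):

	$ kubectl --context functional-580781 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard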
	
	
	==> kube-proxy [3cf0b4c8c0effc0aeb1abff6facee199b21b7d97c90bd0c05e96d5021d3dc510] <==
	I0929 08:48:57.328633       1 server_linux.go:53] "Using iptables proxy"
	I0929 08:48:57.398736       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:48:57.499156       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:48:57.499205       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:48:57.499363       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:48:57.517179       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:48:57.517238       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:48:57.522369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:48:57.522730       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:48:57.522759       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:48:57.524004       1 config.go:200] "Starting service config controller"
	I0929 08:48:57.524408       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:48:57.524611       1 config.go:309] "Starting node config controller"
	I0929 08:48:57.524638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:48:57.525031       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:48:57.525043       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:48:57.525096       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:48:57.525103       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:48:57.624518       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:48:57.625676       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 08:48:57.625779       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:48:57.625802       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ef2ab2b48d81ada5a6d38c217b125bc7066f486fe3d353763fa03f3e46cf1062] <==
	I0929 08:49:37.495720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 08:49:37.595898       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 08:49:37.595958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 08:49:37.596323       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 08:49:37.616663       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 08:49:37.616736       1 server_linux.go:132] "Using iptables Proxier"
	I0929 08:49:37.622131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 08:49:37.622572       1 server.go:527] "Version info" version="v1.34.1"
	I0929 08:49:37.622607       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:49:37.623810       1 config.go:200] "Starting service config controller"
	I0929 08:49:37.623827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 08:49:37.623926       1 config.go:309] "Starting node config controller"
	I0929 08:49:37.623973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 08:49:37.624025       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 08:49:37.624039       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 08:49:37.624063       1 config.go:106] "Starting endpoint slice config controller"
	I0929 08:49:37.624068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 08:49:37.724863       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 08:49:37.724889       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 08:49:37.724902       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 08:49:37.724927       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E0929 08:49:57.362242       1 reflector.go:205] "Failed to watch" err="nodes \"functional-580781\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:49:57.362283       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E0929 08:49:57.362242       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 08:49:57.362240       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-scheduler [31ff02ffd0a6df9f923b935ec3ac237064568d5ed7d33e5e5f040dd3b43363c8] <==
	E0929 08:48:48.629336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:48:48.629343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:48:48.629190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 08:48:48.629108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:48:48.629583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:48:48.629604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 08:48:49.481623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 08:48:49.529255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 08:48:49.592487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 08:48:49.604559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 08:48:49.694389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 08:48:49.697302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 08:48:49.731820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 08:48:49.745001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 08:48:49.759498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 08:48:49.789574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 08:48:49.801523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 08:48:49.827015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0929 08:48:50.224669       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:53.820543       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 08:49:53.820584       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:53.820641       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 08:49:53.820662       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 08:49:53.820691       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 08:49:53.820718       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [346cf15effa5119adbb50a15e72686cb099db1666fa69bfc2a68c8fe414f1503] <==
	I0929 08:49:56.473573       1 serving.go:386] Generated self-signed cert in-memory
	W0929 08:49:57.340173       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 08:49:57.340209       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 08:49:57.340222       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 08:49:57.340232       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 08:49:57.364559       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I0929 08:49:57.364580       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 08:49:57.366868       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:57.366910       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 08:49:57.367205       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 08:49:57.367245       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 08:49:57.467721       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 08:59:25 functional-580781 kubelet[5417]: E0929 08:59:25.671574    5417 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 08:59:25 functional-580781 kubelet[5417]: E0929 08:59:25.671637    5417 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 08:59:25 functional-580781 kubelet[5417]: E0929 08:59:25.671885    5417 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-g7nlv_default(3607a95a-4566-4989-b37c-ed726517bf99): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 08:59:25 functional-580781 kubelet[5417]: E0929 08:59:25.671947    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 08:59:27 functional-580781 kubelet[5417]: E0929 08:59:27.438091    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-rxhk2" podUID="c14b0343-8ceb-4ede-99c3-a1a1c337e9ab"
	Sep 29 08:59:35 functional-580781 kubelet[5417]: E0929 08:59:35.523548    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136375523291995  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:59:35 functional-580781 kubelet[5417]: E0929 08:59:35.523584    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136375523291995  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:59:37 functional-580781 kubelet[5417]: E0929 08:59:37.440326    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 08:59:40 functional-580781 kubelet[5417]: E0929 08:59:40.438041    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="fef4d926-fc98-4617-80db-05dd451129c3"
	Sep 29 08:59:42 functional-580781 kubelet[5417]: E0929 08:59:42.437253    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-rxhk2" podUID="c14b0343-8ceb-4ede-99c3-a1a1c337e9ab"
	Sep 29 08:59:45 functional-580781 kubelet[5417]: E0929 08:59:45.524967    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136385524744417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:59:45 functional-580781 kubelet[5417]: E0929 08:59:45.525007    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136385524744417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:59:48 functional-580781 kubelet[5417]: E0929 08:59:48.438324    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 08:59:55 functional-580781 kubelet[5417]: E0929 08:59:55.526214    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136395525960707  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:59:55 functional-580781 kubelet[5417]: E0929 08:59:55.526253    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136395525960707  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 08:59:59 functional-580781 kubelet[5417]: E0929 08:59:59.438199    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 09:00:05 functional-580781 kubelet[5417]: E0929 09:00:05.527534    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136405527326131  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 09:00:05 functional-580781 kubelet[5417]: E0929 09:00:05.527570    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136405527326131  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 09:00:10 functional-580781 kubelet[5417]: E0929 09:00:10.737067    5417 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 09:00:10 functional-580781 kubelet[5417]: E0929 09:00:10.737138    5417 kuberuntime_image.go:43] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 09:00:10 functional-580781 kubelet[5417]: E0929 09:00:10.737362    5417 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-m95gr_kubernetes-dashboard(0ddcf29c-f780-4161-a48d-be224253d327): ErrImagePull: loading manifest for target platform: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 09:00:10 functional-580781 kubelet[5417]: E0929 09:00:10.737425    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-m95gr" podUID="0ddcf29c-f780-4161-a48d-be224253d327"
	Sep 29 09:00:12 functional-580781 kubelet[5417]: E0929 09:00:12.438534    5417 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g7nlv" podUID="3607a95a-4566-4989-b37c-ed726517bf99"
	Sep 29 09:00:15 functional-580781 kubelet[5417]: E0929 09:00:15.528975    5417 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759136415528751725  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 29 09:00:15 functional-580781 kubelet[5417]: E0929 09:00:15.529008    5417 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759136415528751725  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	
	
	==> storage-provisioner [3ba534cc9995fbd82b83b955735dab9de1c54de1d8fd7119eccb782d77fe63fd] <==
	W0929 08:59:57.304498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:59:59.307533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:59:59.312682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:01.315681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:01.319452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:03.322560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:03.326302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:05.328780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:05.333693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:07.336952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:07.341148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:09.344465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:09.349920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:11.352965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:11.356797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:13.359569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:13.364565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:15.367387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:15.371991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:17.375630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:17.380900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:19.383680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:19.387462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:21.391532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:00:21.395894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8fa1b4de8244fe8931ed42057372c08bda84f704bec61fe8fb90b6020f8df7ae] <==
	W0929 08:49:10.392985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:12.396758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:12.400952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:14.404972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:14.410240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:16.414556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:16.418891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:18.422630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:18.426730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:20.430506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:20.434896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:22.437786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:22.441661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:24.444570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:24.448383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:26.451410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:26.456736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:28.460215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:28.464644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:30.467426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:30.475151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:32.478302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:32.482203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:34.485888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 08:49:34.489874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-580781 -n functional-580781
helpers_test.go:269: (dbg) Run:  kubectl --context functional-580781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx: exit status 1 (100.257161ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:30 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://db229b500cea2a9d934455d2b9a59a2e28deb77a8bbc7c217b4b73c4b22b9246
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 08:52:24 +0000
	      Finished:     Mon, 29 Sep 2025 08:52:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgs2x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-qgs2x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m52s  default-scheduler  Successfully assigned default/busybox-mount to functional-580781
	  Normal  Pulling    9m52s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     7m58s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.114s (1m53.786s including waiting). Image size: 4631262 bytes.
	  Normal  Created    7m58s  kubelet            Created container: mount-munger
	  Normal  Started    7m58s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-rxhk2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:52:30 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8j626 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8j626:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  7m52s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-rxhk2 to functional-580781
	  Warning  Failed     118s (x4 over 6m51s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     118s (x4 over 6m51s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    40s (x11 over 6m51s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     40s (x11 over 6m51s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    29s (x5 over 7m51s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-thgc5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:55:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5cvn7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5cvn7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m25s                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-thgc5 to functional-580781
	  Warning  Failed     118s (x2 over 3m32s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     118s (x2 over 3m32s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    106s (x2 over 3m32s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     106s (x2 over 3m32s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    95s (x3 over 4m24s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-g7nlv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:19 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pnqlc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pnqlc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-g7nlv to functional-580781
	  Warning  Failed     9m1s                 kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m40s (x5 over 10m)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     57s (x5 over 9m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     57s (x4 over 7m28s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    10s (x15 over 9m)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     10s (x15 over 9m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:21 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfpvw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tfpvw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/nginx-svc to functional-580781
	  Warning  Failed     6m51s                   kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m32s (x3 over 8m30s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m32s (x4 over 8m30s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    2m24s (x10 over 8m30s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m24s (x10 over 8m30s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m9s (x5 over 10m)      kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-580781/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 08:50:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kpg5f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kpg5f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m55s                 default-scheduler  Successfully assigned default/sp-pod to functional-580781
	  Warning  Failed     4m34s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     118s (x3 over 7m59s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     118s (x4 over 7m59s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    42s (x11 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     42s (x11 over 7m59s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    29s (x5 over 9m55s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-m95gr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vt9lx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-580781 describe pod busybox-mount hello-node-75c85bcc94-rxhk2 hello-node-connect-7d85dfc575-thgc5 mysql-5bb876957f-g7nlv nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-m95gr kubernetes-dashboard-855c9754f9-vt9lx: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.85s)
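
The MySQL failure above is an unauthenticated Docker Hub pull-rate limit, not a scheduling or storage problem: every ErrImagePull event ends in "toomanyrequests". A minimal sketch of two common workarounds, assuming the host has Docker Hub credentials; the profile/context name functional-580781 is taken from the logs, while the secret name regcred and the <user>/<token> placeholders are purely illustrative:

	# Option 1: pull on the host with authenticated credentials, then side-load the image
	# into the minikube node so the kubelet never performs an anonymous pull.
	docker pull docker.io/mysql:5.7
	minikube -p functional-580781 image load docker.io/mysql:5.7

	# Option 2: attach registry credentials to the default service account so in-cluster
	# pulls from docker.io are authenticated.
	kubectl --context functional-580781 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context functional-580781 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'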

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-580781 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [29b039fe-6e18-4585-9490-7bba9fa796cf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-580781 -n functional-580781
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-29 08:54:21.97400018 +0000 UTC m=+1509.620625623
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-580781 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-580781 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-580781/192.168.49.2
Start Time:       Mon, 29 Sep 2025 08:50:21 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:  10.244.0.5
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfpvw (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tfpvw:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m1s                 default-scheduler  Successfully assigned default/nginx-svc to functional-580781
Warning  Failed     2m30s                kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     51s (x2 over 2m30s)  kubelet            Error: ErrImagePull
Warning  Failed     51s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    38s (x2 over 2m30s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     38s (x2 over 2m30s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    27s (x3 over 4m1s)   kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-580781 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-580781 logs nginx-svc -n default: exit status 1 (69.994673ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-580781 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.66s)
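
This setup failure has the same root cause as the nginx-svc events above: the docker.io/nginx:alpine pull is rate-limited, so the pod never leaves ImagePullBackOff within the 4m0s wait. A quick way to confirm the waiting reason directly from the pod status (a generic kubectl check, not part of the test harness; the context and pod name come from the logs):

	kubectl --context functional-580781 get pod nginx-svc \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'
	# prints ErrImagePull or ImagePullBackOff while the rate limit is in effect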

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-580781 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-580781 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-rxhk2" [c14b0343-8ceb-4ede-99c3-a1a1c337e9ab] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0929 08:52:37.638868  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:53:59.560299  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-580781 -n functional-580781
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-29 09:02:31.073899804 +0000 UTC m=+1998.720525247
functional_test.go:1460: (dbg) Run:  kubectl --context functional-580781 describe po hello-node-75c85bcc94-rxhk2 -n default
functional_test.go:1460: (dbg) kubectl --context functional-580781 describe po hello-node-75c85bcc94-rxhk2 -n default:
Name:             hello-node-75c85bcc94-rxhk2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-580781/192.168.49.2
Start Time:       Mon, 29 Sep 2025 08:52:30 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8j626 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8j626:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-rxhk2 to functional-580781
Normal   Pulling    2m38s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     79s (x5 over 9m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     79s (x5 over 9m)     kubelet            Error: ErrImagePull
Warning  Failed     15s (x16 over 9m)    kubelet            Error: ImagePullBackOff
Normal   BackOff    1s (x17 over 9m)     kubelet            Back-off pulling image "kicbase/echo-server"
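The ErrImagePull above is CRI-O's short-name policy at work: unlike the Docker runtime, CRI-O does not expand "kicbase/echo-server" to "docker.io/kicbase/echo-server:latest" unless /etc/containers/registries.conf defines unqualified-search-registries or a short-name alias. A minimal sketch of the Docker-style normalization that CRI-O is refusing to do, using the github.com/distribution/reference package (illustrative only, not part of the test suite):

package main

import (
	"fmt"

	"github.com/distribution/reference"
)

func main() {
	// Docker-style normalization expands a short name to a fully
	// qualified reference on docker.io and adds :latest if no tag is set.
	named, err := reference.ParseNormalizedNamed("kicbase/echo-server")
	if err != nil {
		panic(err)
	}
	named = reference.TagNameOnly(named) // appends :latest when missing
	fmt.Println(named.String())          // docker.io/kicbase/echo-server:latest

	// CRI-O performs no such expansion; the pull fails unless
	// registries.conf supplies a search registry or alias, or the
	// manifest pins the fully qualified name printed above.
}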
functional_test.go:1460: (dbg) Run:  kubectl --context functional-580781 logs hello-node-75c85bcc94-rxhk2 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-580781 logs hello-node-75c85bcc94-rxhk2 -n default: exit status 1 (69.890381ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-rxhk2" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-580781 logs hello-node-75c85bcc94-rxhk2 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.58s)
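For context, the DeployApp check is essentially a label-selector poll with a hard deadline: when the pod never leaves ImagePullBackOff, the poll exhausts its 10m budget and surfaces "context deadline exceeded". A rough client-go equivalent of that wait (a standalone sketch, not the harness code itself):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s for up to 10m; the error returned on timeout is the
	// same "context deadline exceeded" seen in the failure above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "app=hello-node",
			})
			if err != nil {
				return false, nil // treat API hiccups as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}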

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (94.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0929 08:54:22.112102  386225 retry.go:31] will retry after 3.390589009s: Temporary Error: Get "http:": http: no Host in request URL
I0929 08:54:25.503296  386225 retry.go:31] will retry after 3.888633638s: Temporary Error: Get "http:": http: no Host in request URL
I0929 08:54:29.392944  386225 retry.go:31] will retry after 6.294823232s: Temporary Error: Get "http:": http: no Host in request URL
I0929 08:54:35.688325  386225 retry.go:31] will retry after 11.965420968s: Temporary Error: Get "http:": http: no Host in request URL
I0929 08:54:47.654759  386225 retry.go:31] will retry after 10.467525575s: Temporary Error: Get "http:": http: no Host in request URL
I0929 08:54:58.122902  386225 retry.go:31] will retry after 13.149235385s: Temporary Error: Get "http:": http: no Host in request URL
I0929 08:55:11.272779  386225 retry.go:31] will retry after 18.17936829s: Temporary Error: Get "http:": http: no Host in request URL
I0929 08:55:29.453312  386225 retry.go:31] will retry after 26.840980397s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-580781 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.109.202.166   10.109.202.166   80:31388/TCP   5m35s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (94.24s)
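The repeated `Get "http:": http: no Host in request URL` retries show the test built its request URL from an empty tunnel address, so every attempt fails before a connection is even made; the service itself had an external IP (10.109.202.166, per the svc output above). A small standard-library sketch (a hypothetical helper, not minikube's retry.go) that validates the host before retrying with backoff:

package main

import (
	"errors"
	"fmt"
	"net/http"
	"net/url"
	"time"
)

// getWithBackoff validates the URL first, then retries the GET with a
// growing delay, roughly mirroring the retry cadence in the log above.
func getWithBackoff(rawURL string, attempts int) (*http.Response, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return nil, err
	}
	if u.Host == "" {
		// This is the condition behind "http: no Host in request URL":
		// an empty host can never succeed, so retrying is pointless.
		return nil, errors.New("empty host in URL: " + rawURL)
	}

	delay := 3 * time.Second
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(u.String())
		if err == nil {
			return resp, nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay += delay / 2 // roughly 1.5x backoff per attempt
	}
	return nil, fmt.Errorf("gave up after %d attempts", attempts)
}

func main() {
	if _, err := getWithBackoff("http://10.109.202.166", 5); err != nil {
		fmt.Println(err)
	}
}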

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 service --namespace=default --https --url hello-node: exit status 115 (521.421985ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32088
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-580781 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 service hello-node --url --format={{.IP}}: exit status 115 (532.090068ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-580781 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 service hello-node --url: exit status 115 (525.057833ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32088
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-580781 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32088
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)
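All three ServiceCmd failures (HTTPS, Format, URL) report SVC_UNREACHABLE for the same underlying reason: the NodePort 32088 exists, but no pod backs the hello-node service because of the ImagePullBackOff shown earlier. One way to confirm that state from code, sketched with client-go (illustrative, not the test's own check), is to count the service's ready endpoint addresses:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A Service with a NodePort but zero ready endpoint addresses is
	// exactly the "no running pod for service" situation reported above.
	ep, err := cs.CoreV1().Endpoints("default").Get(context.Background(), "hello-node", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, ss := range ep.Subsets {
		ready += len(ss.Addresses)
	}
	fmt.Printf("hello-node ready endpoint addresses: %d\n", ready)
}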

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vx2xv" [57bd21d6-20a9-46cb-bf7d-d51a2c29739e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-383226 -n old-k8s-version-383226
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 09:45:37.896507209 +0000 UTC m=+4585.543132668
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-383226 describe po kubernetes-dashboard-8694d4445c-vx2xv -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-383226 describe po kubernetes-dashboard-8694d4445c-vx2xv -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-vx2xv
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-383226/192.168.94.2
Start Time:       Mon, 29 Sep 2025 09:36:12 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d86rf (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-d86rf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  9m24s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vx2xv to old-k8s-version-383226
Warning  Failed     8m20s                  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    5m47s (x4 over 9m24s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     5m16s (x4 over 8m20s)  kubelet            Error: ErrImagePull
Warning  Failed     5m16s (x3 over 7m33s)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m4s (x6 over 8m20s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    4m24s (x9 over 8m20s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-383226 logs kubernetes-dashboard-8694d4445c-vx2xv -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-383226 logs kubernetes-dashboard-8694d4445c-vx2xv -n kubernetes-dashboard: exit status 1 (76.149594ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-vx2xv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-383226 logs kubernetes-dashboard-8694d4445c-vx2xv -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
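The dashboard pull fails on Docker Hub's unauthenticated pull rate limit (toomanyrequests), not on anything cluster-side. Docker documents a way to inspect the current anonymous limit via the ratelimitpreview/test repository; a minimal Go sketch of that check follows (endpoint names and headers are taken from Docker's public docs and shown purely as an illustration):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Fetch an anonymous pull token scoped to the rate-limit preview repo.
	tokURL := "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull"
	resp, err := http.Get(tokURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD a manifest and read the rate-limit headers without
	//    consuming a pull.
	req, err := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	resp2, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp2.Body.Close()
	fmt.Println("ratelimit-limit:    ", resp2.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", resp2.Header.Get("ratelimit-remaining"))
}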
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-383226
helpers_test.go:243: (dbg) docker inspect old-k8s-version-383226:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086",
	        "Created": "2025-09-29T09:34:38.375266388Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 714319,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T09:35:48.810030077Z",
	            "FinishedAt": "2025-09-29T09:35:47.766053108Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086/hosts",
	        "LogPath": "/var/lib/docker/containers/7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086/7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086-json.log",
	        "Name": "/old-k8s-version-383226",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-383226:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-383226",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086",
	                "LowerDir": "/var/lib/docker/overlay2/768d18f42649f4b5782f40ecc1928fef28c427f21bda0883b12755f95e303b23-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/768d18f42649f4b5782f40ecc1928fef28c427f21bda0883b12755f95e303b23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/768d18f42649f4b5782f40ecc1928fef28c427f21bda0883b12755f95e303b23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/768d18f42649f4b5782f40ecc1928fef28c427f21bda0883b12755f95e303b23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-383226",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-383226/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-383226",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-383226",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-383226",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8779a5e7de68063ae1be898d629bdb7cebf5b9087119cf86a1ddb0929e88abac",
	            "SandboxKey": "/var/run/docker/netns/8779a5e7de68",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-383226": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:25:e4:c3:7f:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a63824d25a59469f34d03b2b3a3d3f9286340373bc3c74439b9e2ad87eb7dbfe",
	                    "EndpointID": "34d175306776277a1faba9493dc693e3567154d18d3dd5acb8dbb70128bd39b5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-383226",
	                        "7d2e6800721f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-383226 -n old-k8s-version-383226
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-383226 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-383226 logs -n 25: (1.21103541s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-646399 sudo crio config                                                                                                                                                                                                             │ bridge-646399                │ jenkins │ v1.37.0 │ 29 Sep 25 09:35 UTC │ 29 Sep 25 09:35 UTC │
	│ delete  │ -p bridge-646399                                                                                                                                                                                                                              │ bridge-646399                │ jenkins │ v1.37.0 │ 29 Sep 25 09:35 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-463478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p embed-certs-463478 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-879079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p newest-cni-879079 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-879079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-463478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ image   │ newest-cni-879079 image list --format=json                                                                                                                                                                                                    │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ pause   │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ unpause │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-730717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p no-preload-730717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable dashboard -p no-preload-730717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ start   │ -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-547715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p default-k8s-diff-port-547715 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-547715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 09:38:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 09:38:02.602451  744475 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:38:02.604572  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604588  744475 out.go:374] Setting ErrFile to fd 2...
	I0929 09:38:02.604596  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604882  744475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:38:02.605487  744475 out.go:368] Setting JSON to false
	I0929 09:38:02.606828  744475 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12032,"bootTime":1759126651,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 09:38:02.606958  744475 start.go:140] virtualization: kvm guest
	I0929 09:38:02.608781  744475 out.go:179] * [default-k8s-diff-port-547715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 09:38:02.610638  744475 notify.go:220] Checking for updates...
	I0929 09:38:02.610689  744475 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 09:38:02.611947  744475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 09:38:02.613292  744475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:02.614515  744475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 09:38:02.615846  744475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 09:38:02.617298  744475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 09:38:02.619049  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:02.619871  744475 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 09:38:02.651910  744475 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 09:38:02.652021  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.724566  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.711673677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.724736  744475 docker.go:318] overlay module found
	I0929 09:38:02.726847  744475 out.go:179] * Using the docker driver based on existing profile
	I0929 09:38:02.727965  744475 start.go:304] selected driver: docker
	I0929 09:38:02.727982  744475 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.728131  744475 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 09:38:02.728938  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.798201  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.786507737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.798574  744475 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:02.798625  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:02.798695  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:02.798744  744475 start.go:348] cluster config:
	{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.803960  744475 out.go:179] * Starting "default-k8s-diff-port-547715" primary control-plane node in "default-k8s-diff-port-547715" cluster
	I0929 09:38:02.805367  744475 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 09:38:02.806633  744475 out.go:179] * Pulling base image v0.0.48 ...
	I0929 09:38:02.807764  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:02.807815  744475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 09:38:02.807849  744475 cache.go:58] Caching tarball of preloaded images
	I0929 09:38:02.807847  744475 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 09:38:02.807982  744475 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 09:38:02.808000  744475 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 09:38:02.808163  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:02.832169  744475 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 09:38:02.832193  744475 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 09:38:02.832223  744475 cache.go:232] Successfully downloaded all kic artifacts
	I0929 09:38:02.832255  744475 start.go:360] acquireMachinesLock for default-k8s-diff-port-547715: {Name:mkef8140f377b4de895c8571ff44e24be4754e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 09:38:02.832319  744475 start.go:364] duration metric: took 42.901µs to acquireMachinesLock for "default-k8s-diff-port-547715"
	I0929 09:38:02.832343  744475 start.go:96] Skipping create...Using existing machine configuration
	I0929 09:38:02.832351  744475 fix.go:54] fixHost starting: 
	I0929 09:38:02.832639  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:02.854072  744475 fix.go:112] recreateIfNeeded on default-k8s-diff-port-547715: state=Stopped err=<nil>
	W0929 09:38:02.854102  744475 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 09:38:02.225099  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:04.724187  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:06.724381  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:02.857616  744475 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-547715" ...
	I0929 09:38:02.857727  744475 cli_runner.go:164] Run: docker start default-k8s-diff-port-547715
	I0929 09:38:03.156711  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:03.180888  744475 kic.go:430] container "default-k8s-diff-port-547715" state is running.
	I0929 09:38:03.181888  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:03.203574  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:03.203810  744475 machine.go:93] provisionDockerMachine start ...
	I0929 09:38:03.203918  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:03.225450  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:03.225788  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:03.225809  744475 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 09:38:03.226519  744475 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33470->127.0.0.1:33506: read: connection reset by peer
	I0929 09:38:06.363220  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.363248  744475 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-547715"
	I0929 09:38:06.363324  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.381317  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.381536  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.381550  744475 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-547715 && echo "default-k8s-diff-port-547715" | sudo tee /etc/hostname
	I0929 09:38:06.531735  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.531842  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.549948  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.550236  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.550256  744475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-547715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-547715/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-547715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 09:38:06.685613  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 09:38:06.685649  744475 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 09:38:06.685684  744475 ubuntu.go:190] setting up certificates
	I0929 09:38:06.685695  744475 provision.go:84] configureAuth start
	I0929 09:38:06.685750  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:06.704839  744475 provision.go:143] copyHostCerts
	I0929 09:38:06.704915  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem, removing ...
	I0929 09:38:06.704934  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem
	I0929 09:38:06.705006  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 09:38:06.705139  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem, removing ...
	I0929 09:38:06.705152  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem
	I0929 09:38:06.705182  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 09:38:06.705261  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem, removing ...
	I0929 09:38:06.705269  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem
	I0929 09:38:06.705295  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 09:38:06.705471  744475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-547715 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-547715 localhost minikube]
	I0929 09:38:06.863319  744475 provision.go:177] copyRemoteCerts
	I0929 09:38:06.863393  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 09:38:06.863443  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.882627  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:06.979437  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 09:38:07.004710  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 09:38:07.029798  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 09:38:07.054802  744475 provision.go:87] duration metric: took 369.089658ms to configureAuth
	I0929 09:38:07.054846  744475 ubuntu.go:206] setting minikube options for container-runtime
	I0929 09:38:07.055025  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:07.055152  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.073937  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:07.074181  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:07.074200  744475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 09:38:07.357669  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 09:38:07.357696  744475 machine.go:96] duration metric: took 4.15386954s to provisionDockerMachine
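
The drop-in written above marks the in-cluster service CIDR (10.96.0.0/12, matching ServiceCIDR in the profile config) as an insecure registry so that registries exposed on cluster service IPs can be reached without TLS, then restarts CRI-O to pick it up. A minimal sketch of the same step done by hand, assuming the /etc/sysconfig/crio.minikube path used in the kicbase image:

	sudo mkdir -p /etc/sysconfig
	printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
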
	I0929 09:38:07.357709  744475 start.go:293] postStartSetup for "default-k8s-diff-port-547715" (driver="docker")
	I0929 09:38:07.357723  744475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 09:38:07.357795  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 09:38:07.357864  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.376587  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.473948  744475 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 09:38:07.477599  744475 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 09:38:07.477638  744475 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 09:38:07.477651  744475 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 09:38:07.477659  744475 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 09:38:07.477675  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 09:38:07.477729  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 09:38:07.477798  744475 filesync.go:149] local asset: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem -> 3862252.pem in /etc/ssl/certs
	I0929 09:38:07.477941  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 09:38:07.487030  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:07.511935  744475 start.go:296] duration metric: took 154.207911ms for postStartSetup
	I0929 09:38:07.512029  744475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:38:07.512065  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.530146  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.622415  744475 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 09:38:07.627142  744475 fix.go:56] duration metric: took 4.794784277s for fixHost
	I0929 09:38:07.627172  744475 start.go:83] releasing machines lock for "default-k8s-diff-port-547715", held for 4.794838826s
	I0929 09:38:07.627231  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:07.645874  744475 ssh_runner.go:195] Run: cat /version.json
	I0929 09:38:07.645918  744475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 09:38:07.645945  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.645972  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.664991  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.665181  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.828453  744475 ssh_runner.go:195] Run: systemctl --version
	I0929 09:38:07.833549  744475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 09:38:07.976610  744475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 09:38:07.981640  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:07.991646  744475 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 09:38:07.991738  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:08.001522  744475 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 09:38:08.001550  744475 start.go:495] detecting cgroup driver to use...
	I0929 09:38:08.001586  744475 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 09:38:08.001645  744475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 09:38:08.014507  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 09:38:08.026523  744475 docker.go:218] disabling cri-docker service (if available) ...
	I0929 09:38:08.026594  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 09:38:08.040674  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 09:38:08.052914  744475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 09:38:08.121663  744475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 09:38:08.190873  744475 docker.go:234] disabling docker service ...
	I0929 09:38:08.190996  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 09:38:08.203929  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 09:38:08.215853  744475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 09:38:08.282230  744475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 09:38:08.347410  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 09:38:08.359320  744475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 09:38:08.376309  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:08.524854  744475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 09:38:08.524933  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.536486  744475 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 09:38:08.536545  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.547317  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.557769  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.568183  744475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 09:38:08.578182  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.588665  744475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.598857  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.609520  744475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 09:38:08.618464  744475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 09:38:08.627869  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:08.694951  744475 ssh_runner.go:195] Run: sudo systemctl restart crio
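
For readability, the sed edits above amount to the following settings in the CRI-O drop-in before the restart (a sketch of the relevant keys only; the real /etc/crio/crio.conf.d/02-crio.conf in the kicbase image carries more options, and the section headers are assumed from CRI-O's stock layout):

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
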
	I0929 09:38:08.976752  744475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 09:38:08.976819  744475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 09:38:08.980869  744475 start.go:563] Will wait 60s for crictl version
	I0929 09:38:08.980932  744475 ssh_runner.go:195] Run: which crictl
	I0929 09:38:08.984701  744475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 09:38:09.019500  744475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 09:38:09.019620  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.055087  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.091964  744475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	W0929 09:38:08.724626  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:09.223924  739826 pod_ready.go:94] pod "coredns-66bc5c9577-ncwp4" is "Ready"
	I0929 09:38:09.224002  739826 pod_ready.go:86] duration metric: took 41.005435401s for pod "coredns-66bc5c9577-ncwp4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.226573  739826 pod_ready.go:83] waiting for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.230177  739826 pod_ready.go:94] pod "etcd-no-preload-730717" is "Ready"
	I0929 09:38:09.230196  739826 pod_ready.go:86] duration metric: took 3.600648ms for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.232019  739826 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.235556  739826 pod_ready.go:94] pod "kube-apiserver-no-preload-730717" is "Ready"
	I0929 09:38:09.235574  739826 pod_ready.go:86] duration metric: took 3.535675ms for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.237200  739826 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.422451  739826 pod_ready.go:94] pod "kube-controller-manager-no-preload-730717" is "Ready"
	I0929 09:38:09.422486  739826 pod_ready.go:86] duration metric: took 185.263743ms for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.623052  739826 pod_ready.go:83] waiting for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.022664  739826 pod_ready.go:94] pod "kube-proxy-4bmgw" is "Ready"
	I0929 09:38:10.022689  739826 pod_ready.go:86] duration metric: took 399.612543ms for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.224443  739826 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622809  739826 pod_ready.go:94] pod "kube-scheduler-no-preload-730717" is "Ready"
	I0929 09:38:10.622852  739826 pod_ready.go:86] duration metric: took 398.374387ms for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622869  739826 pod_ready.go:40] duration metric: took 42.407933129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:10.670550  739826 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:10.673808  739826 out.go:179] * Done! kubectl is now configured to use "no-preload-730717" cluster and "default" namespace by default
	I0929 09:38:09.093120  744475 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-547715 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 09:38:09.111264  744475 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 09:38:09.115466  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
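
The one-liner above (and the identical control-plane.minikube.internal update further down) is an idempotent hosts-entry rewrite: drop any existing line ending in a tab plus the name, append a fresh ip<tab>name pair, and copy the temp file back over /etc/hosts. The same pattern as a small reusable shell function (a sketch; the function name is illustrative, not from the log):

	# usage: update_hosts_entry 192.168.85.1 host.minikube.internal
	update_hosts_entry() {
	  ip="$1"; name="$2"
	  # keep every line that does not end in "<tab><name>", then append the new entry
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "${ip}" "${name}"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts
	  rm -f "/tmp/hosts.$$"
	}
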
	I0929 09:38:09.127999  744475 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 09:38:09.128194  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.274999  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.416048  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.554074  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:09.554387  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.693270  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.833942  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.976460  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.021351  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.021374  744475 crio.go:433] Images already preloaded, skipping extraction
	I0929 09:38:10.021423  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.057863  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.057891  744475 cache_images.go:85] Images are preloaded, skipping loading
	I0929 09:38:10.057901  744475 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I0929 09:38:10.058037  744475 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-547715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 09:38:10.058111  744475 ssh_runner.go:195] Run: crio config
	I0929 09:38:10.102165  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:10.102193  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:10.102207  744475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 09:38:10.102236  744475 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-547715 NodeName:default-k8s-diff-port-547715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 09:38:10.102404  744475 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-547715"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 09:38:10.102481  744475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 09:38:10.112188  744475 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 09:38:10.112255  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 09:38:10.121661  744475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 09:38:10.140487  744475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 09:38:10.160494  744475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
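
The kubeadm.yaml.new just copied over contains the three documents rendered above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). On a fresh cluster this is the file kubeadm would be fed, with something like the command below; in this run it is only diffed against the existing /var/tmp/minikube/kubeadm.yaml later on, because the control plane is being restarted rather than initialized (the exact init flags minikube adds are not shown in this log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
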
	I0929 09:38:10.179722  744475 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 09:38:10.183977  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:10.196126  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.262691  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.292254  744475 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715 for IP: 192.168.85.2
	I0929 09:38:10.292283  744475 certs.go:194] generating shared ca certs ...
	I0929 09:38:10.292301  744475 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.292443  744475 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 09:38:10.292483  744475 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 09:38:10.292493  744475 certs.go:256] generating profile certs ...
	I0929 09:38:10.292592  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/client.key
	I0929 09:38:10.292649  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key.78d67a41
	I0929 09:38:10.292690  744475 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key
	I0929 09:38:10.292789  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem (1338 bytes)
	W0929 09:38:10.292816  744475 certs.go:480] ignoring /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225_empty.pem, impossibly tiny 0 bytes
	I0929 09:38:10.292825  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 09:38:10.292877  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 09:38:10.292902  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 09:38:10.292924  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 09:38:10.292963  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:10.293652  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 09:38:10.320976  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 09:38:10.349012  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 09:38:10.381487  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 09:38:10.406553  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 09:38:10.432469  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 09:38:10.458734  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 09:38:10.483339  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 09:38:10.508019  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /usr/share/ca-certificates/3862252.pem (1708 bytes)
	I0929 09:38:10.533382  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 09:38:10.558362  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem --> /usr/share/ca-certificates/386225.pem (1338 bytes)
	I0929 09:38:10.583377  744475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 09:38:10.602070  744475 ssh_runner.go:195] Run: openssl version
	I0929 09:38:10.607660  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3862252.pem && ln -fs /usr/share/ca-certificates/3862252.pem /etc/ssl/certs/3862252.pem"
	I0929 09:38:10.617911  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622307  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 08:48 /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622354  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.629918  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3862252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 09:38:10.640804  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 09:38:10.651151  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655258  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655316  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.662603  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 09:38:10.672822  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386225.pem && ln -fs /usr/share/ca-certificates/386225.pem /etc/ssl/certs/386225.pem"
	I0929 09:38:10.683319  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687277  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 08:48 /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687348  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.696079  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386225.pem /etc/ssl/certs/51391683.0"
	I0929 09:38:10.707660  744475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 09:38:10.711977  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 09:38:10.719705  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 09:38:10.727227  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 09:38:10.734938  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 09:38:10.742331  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 09:38:10.750000  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
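
Each of the openssl checks above uses -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would mean the cert is expired or about to expire, presumably prompting regeneration. Standalone form of one such check:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "certificate valid for at least another 24h" \
	  || echo "certificate expires within 24h (or is already expired)"
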
	I0929 09:38:10.758994  744475 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:10.759111  744475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 09:38:10.759156  744475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 09:38:10.801701  744475 cri.go:89] found id: ""
	I0929 09:38:10.801777  744475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 09:38:10.814003  744475 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 09:38:10.814030  744475 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 09:38:10.814082  744475 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 09:38:10.825280  744475 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:38:10.826421  744475 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-547715" does not appear in /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.827379  744475 kubeconfig.go:62] /home/jenkins/minikube-integration/21650-382648/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-547715" cluster setting kubeconfig missing "default-k8s-diff-port-547715" context setting]
	I0929 09:38:10.828702  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.830983  744475 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 09:38:10.843171  744475 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 09:38:10.843214  744475 kubeadm.go:593] duration metric: took 29.177344ms to restartPrimaryControlPlane
	I0929 09:38:10.843227  744475 kubeadm.go:394] duration metric: took 84.244515ms to StartCluster
	I0929 09:38:10.843248  744475 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.843363  744475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.845603  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.846384  744475 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 09:38:10.846454  744475 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 09:38:10.846542  744475 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846565  744475 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846574  744475 addons.go:247] addon storage-provisioner should already be in state true
	I0929 09:38:10.846575  744475 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846596  744475 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846614  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846620  744475 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:10.846621  744475 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-547715"
	I0929 09:38:10.846618  744475 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846630  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:10.846642  744475 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846656  744475 addons.go:247] addon dashboard should already be in state true
	W0929 09:38:10.846631  744475 addons.go:247] addon metrics-server should already be in state true
	I0929 09:38:10.846681  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846697  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846974  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847135  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847150  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847155  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.848072  744475 out.go:179] * Verifying Kubernetes components...
	I0929 09:38:10.849415  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.877953  744475 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 09:38:10.877980  744475 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 09:38:10.878525  744475 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.878545  744475 addons.go:247] addon default-storageclass should already be in state true
	I0929 09:38:10.878575  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.879047  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.879403  744475 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 09:38:10.879439  744475 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:10.879448  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 09:38:10.879475  744475 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 09:38:10.879548  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.879454  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 09:38:10.879612  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.883150  744475 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 09:38:10.884341  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 09:38:10.884361  744475 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 09:38:10.884428  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.910318  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.910796  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.911948  744475 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:10.911964  744475 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 09:38:10.912016  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.914592  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.935385  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.956363  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.989150  744475 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:11.038321  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:11.042162  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 09:38:11.042187  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 09:38:11.047218  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 09:38:11.047242  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 09:38:11.070239  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.072804  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 09:38:11.072828  744475 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 09:38:11.078863  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 09:38:11.078893  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 09:38:11.104886  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.104914  744475 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 09:38:11.110131  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 09:38:11.110158  744475 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 09:38:11.142191  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 09:38:11.142219  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 09:38:11.148094  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.148238  744475 retry.go:31] will retry after 359.205678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.151384  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.179885  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 09:38:11.179923  744475 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 09:38:11.182481  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.182514  744475 retry.go:31] will retry after 316.417959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.208649  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 09:38:11.208682  744475 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 09:38:11.232655  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 09:38:11.232724  744475 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 09:38:11.252807  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 09:38:11.252860  744475 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 09:38:11.272945  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.272972  744475 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 09:38:11.292603  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.499678  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.508207  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:12.841081  744475 node_ready.go:49] node "default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:12.841123  744475 node_ready.go:38] duration metric: took 1.85187108s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:12.841142  744475 api_server.go:52] waiting for apiserver process to appear ...
	I0929 09:38:12.841200  744475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:38:13.424995  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.273447364s)
	I0929 09:38:13.425060  744475 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:13.425163  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.132513063s)
	I0929 09:38:13.425661  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.925949942s)
	I0929 09:38:13.425900  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.917662767s)
	I0929 09:38:13.426006  744475 api_server.go:72] duration metric: took 2.57958819s to wait for apiserver process to appear ...
	I0929 09:38:13.426024  744475 api_server.go:88] waiting for apiserver healthz status ...
	I0929 09:38:13.426045  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.427072  744475 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-547715 addons enable metrics-server
	
	I0929 09:38:13.431499  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.431522  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:13.435572  744475 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0929 09:38:13.436883  744475 addons.go:514] duration metric: took 2.590443822s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0929 09:38:13.926913  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.932318  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.932348  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:14.426994  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:14.431739  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0929 09:38:14.432753  744475 api_server.go:141] control plane version: v1.34.1
	I0929 09:38:14.432785  744475 api_server.go:131] duration metric: took 1.006754243s to wait for apiserver health ...
	I0929 09:38:14.432798  744475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 09:38:14.435903  744475 system_pods.go:59] 9 kube-system pods found
	I0929 09:38:14.435952  744475 system_pods.go:61] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.435967  744475 system_pods.go:61] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.435982  744475 system_pods.go:61] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.435998  744475 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.436014  744475 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.436023  744475 system_pods.go:61] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.436033  744475 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.436045  744475 system_pods.go:61] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.436053  744475 system_pods.go:61] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.436063  744475 system_pods.go:74] duration metric: took 3.257318ms to wait for pod list to return data ...
	I0929 09:38:14.436077  744475 default_sa.go:34] waiting for default service account to be created ...
	I0929 09:38:14.438271  744475 default_sa.go:45] found service account: "default"
	I0929 09:38:14.438293  744475 default_sa.go:55] duration metric: took 2.206178ms for default service account to be created ...
	I0929 09:38:14.438304  744475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 09:38:14.441520  744475 system_pods.go:86] 9 kube-system pods found
	I0929 09:38:14.441555  744475 system_pods.go:89] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.441569  744475 system_pods.go:89] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.441583  744475 system_pods.go:89] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.441591  744475 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.441606  744475 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.441613  744475 system_pods.go:89] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.441622  744475 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.441633  744475 system_pods.go:89] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.441641  744475 system_pods.go:89] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.441654  744475 system_pods.go:126] duration metric: took 3.342797ms to wait for k8s-apps to be running ...
	I0929 09:38:14.441667  744475 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 09:38:14.441718  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:38:14.457198  744475 system_svc.go:56] duration metric: took 15.510885ms WaitForService to wait for kubelet
	I0929 09:38:14.457234  744475 kubeadm.go:578] duration metric: took 3.610818298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:14.457257  744475 node_conditions.go:102] verifying NodePressure condition ...
	I0929 09:38:14.460508  744475 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 09:38:14.460534  744475 node_conditions.go:123] node cpu capacity is 8
	I0929 09:38:14.460550  744475 node_conditions.go:105] duration metric: took 3.287088ms to run NodePressure ...
	I0929 09:38:14.460566  744475 start.go:241] waiting for startup goroutines ...
	I0929 09:38:14.460575  744475 start.go:246] waiting for cluster config update ...
	I0929 09:38:14.460591  744475 start.go:255] writing updated cluster config ...
	I0929 09:38:14.461011  744475 ssh_runner.go:195] Run: rm -f paused
	I0929 09:38:14.465262  744475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:14.469249  744475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 09:38:16.474616  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:18.974817  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:21.474679  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:23.974653  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:25.974904  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:27.975234  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:30.474414  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:32.475244  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:34.975746  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:37.474689  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:39.974324  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:42.474794  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:44.476364  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:46.974499  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:49.474657  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:51.474940  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	I0929 09:38:52.974403  744475 pod_ready.go:94] pod "coredns-66bc5c9577-szmnf" is "Ready"
	I0929 09:38:52.974429  744475 pod_ready.go:86] duration metric: took 38.50515659s for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.977032  744475 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.980878  744475 pod_ready.go:94] pod "etcd-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.980904  744475 pod_ready.go:86] duration metric: took 3.847603ms for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.982681  744475 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.986175  744475 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.986196  744475 pod_ready.go:86] duration metric: took 3.493752ms for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.988006  744475 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.172805  744475 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:53.172860  744475 pod_ready.go:86] duration metric: took 184.829323ms for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.372987  744475 pod_ready.go:83] waiting for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.772398  744475 pod_ready.go:94] pod "kube-proxy-tklgn" is "Ready"
	I0929 09:38:53.772428  744475 pod_ready.go:86] duration metric: took 399.413461ms for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.972993  744475 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373344  744475 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:54.373370  744475 pod_ready.go:86] duration metric: took 400.353446ms for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373382  744475 pod_ready.go:40] duration metric: took 39.908092821s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:54.420218  744475 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:54.422092  744475 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-547715" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 09:44:16 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:16.467504435Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=419f407e-c33d-43c5-b93d-01da1176d827 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:19 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:19.468084999Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=044eb706-60b7-41ae-b4da-edcfd5849d44 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:19 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:19.468366134Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=044eb706-60b7-41ae-b4da-edcfd5849d44 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:27 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:27.467250128Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e2ace24a-5210-43ae-a0d7-c07724d97f99 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:27 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:27.467493585Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e2ace24a-5210-43ae-a0d7-c07724d97f99 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:30 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:30.467669984Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=41e5b14e-0efe-4ba5-af85-bfa944b91a7a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:30 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:30.468020550Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=41e5b14e-0efe-4ba5-af85-bfa944b91a7a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:42 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:42.467819768Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=688182d5-38a6-43ce-86c3-fc7abe48d371 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:42 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:42.468068070Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=aac3e2d1-1ae9-443c-80c2-2e95154e9c03 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:42 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:42.468209666Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=688182d5-38a6-43ce-86c3-fc7abe48d371 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:42 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:42.468337029Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=aac3e2d1-1ae9-443c-80c2-2e95154e9c03 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:53 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:53.468032184Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2f029b5e-3359-44e3-8a48-d4332591cff2 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:53 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:53.468272564Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=2f029b5e-3359-44e3-8a48-d4332591cff2 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:54 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:54.467543880Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2ed2f0b7-c7dc-4561-a282-575ef7b85d34 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:44:54 old-k8s-version-383226 crio[557]: time="2025-09-29 09:44:54.467796230Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2ed2f0b7-c7dc-4561-a282-575ef7b85d34 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:06 old-k8s-version-383226 crio[557]: time="2025-09-29 09:45:06.468172169Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9ba0e5b6-b37b-416e-89eb-fac6a4f065d5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:06 old-k8s-version-383226 crio[557]: time="2025-09-29 09:45:06.468483282Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9ba0e5b6-b37b-416e-89eb-fac6a4f065d5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:06 old-k8s-version-383226 crio[557]: time="2025-09-29 09:45:06.469024549Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8155bda5-7c60-4982-b097-c2177a750b30 name=/runtime.v1.ImageService/PullImage
	Sep 29 09:45:06 old-k8s-version-383226 crio[557]: time="2025-09-29 09:45:06.473527263Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 09:45:08 old-k8s-version-383226 crio[557]: time="2025-09-29 09:45:08.467084911Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ca375b06-bd0d-4939-8929-40685a5b7f32 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:08 old-k8s-version-383226 crio[557]: time="2025-09-29 09:45:08.467331221Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ca375b06-bd0d-4939-8929-40685a5b7f32 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:23 old-k8s-version-383226 crio[557]: time="2025-09-29 09:45:23.467899374Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=891e88a5-fe93-47a2-8a62-0b70b30f6570 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:23 old-k8s-version-383226 crio[557]: time="2025-09-29 09:45:23.468165605Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=891e88a5-fe93-47a2-8a62-0b70b30f6570 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:37 old-k8s-version-383226 crio[557]: time="2025-09-29 09:45:37.467510223Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9b465ef1-5231-4b51-8a45-76e68eae890f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:37 old-k8s-version-383226 crio[557]: time="2025-09-29 09:45:37.467728959Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9b465ef1-5231-4b51-8a45-76e68eae890f name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	1825f13639ffd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   40b2e581322fc       dashboard-metrics-scraper-5f989dc9cf-qwlrl
	d16e443ed650c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   965faa72c74e5       storage-provisioner
	cbc99b97a3843       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                     1                   3fa847a934caa       coredns-5dd5756b68-cwxnf
	fc0d0d64c4cd2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   93a0e999fb28b       busybox
	b44c7d38be7cf       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   0e6a6d37467cd       kindnet-wz6rq
	b7730ad695c27       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a   9 minutes ago       Running             kube-proxy                  1                   c9017a48e7b05       kube-proxy-g86rz
	0efde9fa2435d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   965faa72c74e5       storage-provisioner
	da45c5617ae88       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95   9 minutes ago       Running             kube-apiserver              1                   37aea600c115c       kube-apiserver-old-k8s-version-383226
	32d9fda5cc39b       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157   9 minutes ago       Running             kube-scheduler              1                   37898ba1607f3       kube-scheduler-old-k8s-version-383226
	63b9f8f8d0ec0       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62   9 minutes ago       Running             kube-controller-manager     1                   5175803881e7b       kube-controller-manager-old-k8s-version-383226
	0d1a11e2d7b3f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                        1                   f2f8f4d736ebb       etcd-old-k8s-version-383226
	
	
	==> coredns [cbc99b97a384328da06f3312c734d7b8e538dcff484708c376e421f1ae89db34] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55789 - 43854 "HINFO IN 5184595554245198180.5627202947282764251. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060921737s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-383226
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-383226
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=old-k8s-version-383226
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T09_34_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 09:34:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-383226
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:45:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 09:41:37 +0000   Mon, 29 Sep 2025 09:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 09:41:37 +0000   Mon, 29 Sep 2025 09:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 09:41:37 +0000   Mon, 29 Sep 2025 09:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 09:41:37 +0000   Mon, 29 Sep 2025 09:35:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-383226
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0d427e2d6b420688a79baa17a6c956
	  System UUID:                63eb07de-01db-42f8-9240-9b88f7ef75f9
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-cwxnf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-old-k8s-version-383226                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-wz6rq                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-old-k8s-version-383226             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-383226    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-g86rz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-383226             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-57f55c9bc5-56tsv                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-qwlrl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-vx2xv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m38s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node old-k8s-version-383226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node old-k8s-version-383226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node old-k8s-version-383226 event: Registered Node old-k8s-version-383226 in Controller
	  Normal  NodeReady                10m                    kubelet          Node old-k8s-version-383226 status is now: NodeReady
	  Normal  Starting                 9m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m43s (x8 over 9m43s)  kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m43s (x8 over 9m43s)  kubelet          Node old-k8s-version-383226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m43s (x8 over 9m43s)  kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m27s                  node-controller  Node old-k8s-version-383226 event: Registered Node old-k8s-version-383226 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +24.116183] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[ +13.219794] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 36 70 5c 70 56 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[Sep29 09:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 d0 49 6d e5 00 08 06
	[  +0.000572] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +31.077955] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[  +7.090917] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ee a6 ac d9 7a 08 06
	[  +0.048507] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	[Sep29 09:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 9c 10 70 fc bc 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[ +35.403219] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 f0 eb 9a e4 7a 08 06
	[  +0.000378] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	
	
	==> etcd [0d1a11e2d7b3fb658bc4fc710774f7c66a90df230859619c58f1873e32ee7a89] <==
	{"level":"info","ts":"2025-09-29T09:35:57.324715Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T09:35:57.325063Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"dfc97eb0aae75b33","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-09-29T09:35:57.329232Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-29T09:35:57.329491Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-29T09:35:57.329532Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T09:35:57.329638Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-09-29T09:35:57.329646Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-09-29T09:35:57.689371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T09:35:57.68943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T09:35:57.689456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-09-29T09:35:57.689474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T09:35:57.689482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-09-29T09:35:57.689494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-09-29T09:35:57.689504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-09-29T09:35:57.69054Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-383226 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T09:35:57.690556Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T09:35:57.690873Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T09:35:57.691242Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T09:35:57.691273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T09:35:57.694548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-09-29T09:35:57.696531Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T09:36:05.694336Z","caller":"traceutil/trace.go:171","msg":"trace[986107393] transaction","detail":"{read_only:false; response_revision:597; number_of_response:1; }","duration":"108.827245ms","start":"2025-09-29T09:36:05.585489Z","end":"2025-09-29T09:36:05.694317Z","steps":["trace[986107393] 'process raft request'  (duration: 108.697529ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T09:36:59.390187Z","caller":"traceutil/trace.go:171","msg":"trace[424009884] transaction","detail":"{read_only:false; response_revision:707; number_of_response:1; }","duration":"139.542235ms","start":"2025-09-29T09:36:59.250628Z","end":"2025-09-29T09:36:59.39017Z","steps":["trace[424009884] 'process raft request'  (duration: 139.397986ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T09:36:59.574073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.140043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T09:36:59.574164Z","caller":"traceutil/trace.go:171","msg":"trace[975101957] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:707; }","duration":"140.248373ms","start":"2025-09-29T09:36:59.433899Z","end":"2025-09-29T09:36:59.574148Z","steps":["trace[975101957] 'range keys from in-memory index tree'  (duration: 140.05711ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:45:39 up  3:28,  0 users,  load average: 0.26, 1.00, 1.59
	Linux old-k8s-version-383226 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b44c7d38be7cff3ba699998aac743de0e4b4f31749e06739e8bb89aea0ff87a3] <==
	I0929 09:43:31.394239       1 main.go:301] handling current node
	I0929 09:43:41.400639       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:43:41.400667       1 main.go:301] handling current node
	I0929 09:43:51.397611       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:43:51.397644       1 main.go:301] handling current node
	I0929 09:44:01.399946       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:44:01.399975       1 main.go:301] handling current node
	I0929 09:44:11.397075       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:44:11.397108       1 main.go:301] handling current node
	I0929 09:44:21.396962       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:44:21.397000       1 main.go:301] handling current node
	I0929 09:44:31.400807       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:44:31.400847       1 main.go:301] handling current node
	I0929 09:44:41.399950       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:44:41.399988       1 main.go:301] handling current node
	I0929 09:44:51.397430       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:44:51.397458       1 main.go:301] handling current node
	I0929 09:45:01.399917       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:45:01.399957       1 main.go:301] handling current node
	I0929 09:45:11.392460       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:45:11.392513       1 main.go:301] handling current node
	I0929 09:45:21.396938       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:45:21.396988       1 main.go:301] handling current node
	I0929 09:45:31.400947       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:45:31.400979       1 main.go:301] handling current node
	
	
	==> kube-apiserver [da45c5617ae88aa853d0e35427b7dc76ac2b9ebb0e4e1d666dc6db4eb7bd546e] <==
	E0929 09:41:00.546441       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 09:41:00.547557       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 09:41:59.467777       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.16.37:443: connect: connection refused
	I0929 09:41:59.467807       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 09:42:00.546937       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 09:42:00.546976       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 09:42:00.546983       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:42:00.548112       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 09:42:00.548190       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 09:42:00.548224       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 09:42:59.467919       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.16.37:443: connect: connection refused
	I0929 09:42:59.467941       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 09:43:59.467368       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.16.37:443: connect: connection refused
	I0929 09:43:59.467390       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 09:44:00.547202       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 09:44:00.547240       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 09:44:00.547247       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:44:00.548345       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 09:44:00.548431       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 09:44:00.548442       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 09:44:59.467875       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.16.37:443: connect: connection refused
	I0929 09:44:59.467902       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [63b9f8f8d0ec019fbffeb3a52a7e634a9b34fe34ad87e1960e1c15d0282cc91d] <==
	I0929 09:40:54.477713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="123.897µs"
	E0929 09:41:12.711082       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:41:13.042473       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 09:41:42.715635       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:41:43.049962       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 09:41:54.288084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.904µs"
	I0929 09:42:03.288858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.396µs"
	E0929 09:42:12.720894       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:42:13.057335       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 09:42:36.477539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="143.327µs"
	E0929 09:42:42.725920       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:42:43.064611       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 09:42:50.477309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="97.394µs"
	E0929 09:43:12.731204       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:43:13.071454       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 09:43:31.476874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="132.033µs"
	E0929 09:43:42.735759       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:43:43.078480       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 09:43:43.477347       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="160.618µs"
	E0929 09:44:12.740264       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:44:13.085959       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 09:44:42.744902       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:44:43.093584       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 09:45:12.749162       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:45:13.100553       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b7730ad695c272feca810f40fdf6d89ecf76608c2946f871afe0f3bff90fa953] <==
	I0929 09:36:01.052478       1 server_others.go:69] "Using iptables proxy"
	I0929 09:36:01.069155       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I0929 09:36:01.100068       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 09:36:01.103152       1 server_others.go:152] "Using iptables Proxier"
	I0929 09:36:01.103299       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 09:36:01.103334       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 09:36:01.103393       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 09:36:01.103771       1 server.go:846] "Version info" version="v1.28.0"
	I0929 09:36:01.104005       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:36:01.104885       1 config.go:188] "Starting service config controller"
	I0929 09:36:01.104981       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 09:36:01.105077       1 config.go:315] "Starting node config controller"
	I0929 09:36:01.105113       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 09:36:01.106303       1 config.go:97] "Starting endpoint slice config controller"
	I0929 09:36:01.106326       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 09:36:01.205872       1 shared_informer.go:318] Caches are synced for node config
	I0929 09:36:01.205890       1 shared_informer.go:318] Caches are synced for service config
	I0929 09:36:01.207021       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [32d9fda5cc39b4851a4b1de08738a1a3a7d5db22a27ccec955be87812975396a] <==
	I0929 09:35:58.805755       1 serving.go:348] Generated self-signed cert in-memory
	W0929 09:35:59.515764       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 09:35:59.515915       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 09:35:59.515941       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 09:35:59.515968       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 09:35:59.560380       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0929 09:35:59.560507       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:35:59.569522       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:35:59.569621       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 09:35:59.573266       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0929 09:35:59.573368       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0929 09:35:59.670812       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 29 09:44:24 old-k8s-version-383226 kubelet[699]: I0929 09:44:24.467327     699 scope.go:117] "RemoveContainer" containerID="1825f13639ffd1c406b52a25b15fcfd71cef0688e4053bd1e7344b99a162deee"
	Sep 29 09:44:24 old-k8s-version-383226 kubelet[699]: E0929 09:44:24.467792     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:44:27 old-k8s-version-383226 kubelet[699]: E0929 09:44:27.467770     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	Sep 29 09:44:30 old-k8s-version-383226 kubelet[699]: E0929 09:44:30.468309     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vx2xv" podUID="57bd21d6-20a9-46cb-bf7d-d51a2c29739e"
	Sep 29 09:44:38 old-k8s-version-383226 kubelet[699]: I0929 09:44:38.467768     699 scope.go:117] "RemoveContainer" containerID="1825f13639ffd1c406b52a25b15fcfd71cef0688e4053bd1e7344b99a162deee"
	Sep 29 09:44:38 old-k8s-version-383226 kubelet[699]: E0929 09:44:38.468174     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:44:42 old-k8s-version-383226 kubelet[699]: E0929 09:44:42.468476     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	Sep 29 09:44:42 old-k8s-version-383226 kubelet[699]: E0929 09:44:42.468563     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vx2xv" podUID="57bd21d6-20a9-46cb-bf7d-d51a2c29739e"
	Sep 29 09:44:53 old-k8s-version-383226 kubelet[699]: I0929 09:44:53.467617     699 scope.go:117] "RemoveContainer" containerID="1825f13639ffd1c406b52a25b15fcfd71cef0688e4053bd1e7344b99a162deee"
	Sep 29 09:44:53 old-k8s-version-383226 kubelet[699]: E0929 09:44:53.468000     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:44:53 old-k8s-version-383226 kubelet[699]: E0929 09:44:53.468539     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vx2xv" podUID="57bd21d6-20a9-46cb-bf7d-d51a2c29739e"
	Sep 29 09:44:54 old-k8s-version-383226 kubelet[699]: E0929 09:44:54.468082     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	Sep 29 09:45:04 old-k8s-version-383226 kubelet[699]: I0929 09:45:04.467918     699 scope.go:117] "RemoveContainer" containerID="1825f13639ffd1c406b52a25b15fcfd71cef0688e4053bd1e7344b99a162deee"
	Sep 29 09:45:04 old-k8s-version-383226 kubelet[699]: E0929 09:45:04.468335     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:45:08 old-k8s-version-383226 kubelet[699]: E0929 09:45:08.467594     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	Sep 29 09:45:19 old-k8s-version-383226 kubelet[699]: I0929 09:45:19.466730     699 scope.go:117] "RemoveContainer" containerID="1825f13639ffd1c406b52a25b15fcfd71cef0688e4053bd1e7344b99a162deee"
	Sep 29 09:45:19 old-k8s-version-383226 kubelet[699]: E0929 09:45:19.467065     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:45:23 old-k8s-version-383226 kubelet[699]: E0929 09:45:23.468491     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	Sep 29 09:45:31 old-k8s-version-383226 kubelet[699]: I0929 09:45:31.467205     699 scope.go:117] "RemoveContainer" containerID="1825f13639ffd1c406b52a25b15fcfd71cef0688e4053bd1e7344b99a162deee"
	Sep 29 09:45:31 old-k8s-version-383226 kubelet[699]: E0929 09:45:31.467491     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:45:37 old-k8s-version-383226 kubelet[699]: E0929 09:45:37.124443     699 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 09:45:37 old-k8s-version-383226 kubelet[699]: E0929 09:45:37.124496     699 kuberuntime_image.go:53] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 09:45:37 old-k8s-version-383226 kubelet[699]: E0929 09:45:37.124606     699 kuberuntime_manager.go:1209] container &Container{Name:kubernetes-dashboard,Image:docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Command:[],Args:[--namespace=kubernetes-dashboard --enable-skip-login --disable-settings-authorizer],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d86rf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9090 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kubernetes-dashboard-8694d4445c-vx2xv_kubernetes-dashboard(57bd21d6-20a9-46cb-bf7d-d51a2c29739e): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Sep 29 09:45:37 old-k8s-version-383226 kubelet[699]: E0929 09:45:37.124652     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vx2xv" podUID="57bd21d6-20a9-46cb-bf7d-d51a2c29739e"
	Sep 29 09:45:37 old-k8s-version-383226 kubelet[699]: E0929 09:45:37.468047     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	
	
	==> storage-provisioner [0efde9fa2435d78ffaf14f9f1ce132db8dc31af8adfabfd2e82ab66356107690] <==
	I0929 09:36:00.967969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 09:36:30.971245       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d16e443ed650cba7b6a82e26e02342c9a54d95c883ce25457b31bbdcee42b571] <==
	I0929 09:36:31.755823       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 09:36:31.770468       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 09:36:31.770552       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0929 09:36:49.169113       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 09:36:49.169203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f22a9583-c2b0-463c-8e07-b1d35b2e39b6", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-383226_3b568e21-2be8-4088-9270-5d3a50a9340b became leader
	I0929 09:36:49.169317       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-383226_3b568e21-2be8-4088-9270-5d3a50a9340b!
	I0929 09:36:49.269643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-383226_3b568e21-2be8-4088-9270-5d3a50a9340b!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-383226 -n old-k8s-version-383226
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-383226 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-56tsv kubernetes-dashboard-8694d4445c-vx2xv
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-383226 describe pod metrics-server-57f55c9bc5-56tsv kubernetes-dashboard-8694d4445c-vx2xv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-383226 describe pod metrics-server-57f55c9bc5-56tsv kubernetes-dashboard-8694d4445c-vx2xv: exit status 1 (60.396009ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-56tsv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-vx2xv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-383226 describe pod metrics-server-57f55c9bc5-56tsv kubernetes-dashboard-8694d4445c-vx2xv: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.46s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z8j9m" [ed919a2e-20ad-45ae-af2e-22135bc8c096] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 09:37:28.740357  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:33.327317  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-463478 -n embed-certs-463478
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 09:46:26.219218824 +0000 UTC m=+4633.865844269
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-463478 describe po kubernetes-dashboard-855c9754f9-z8j9m -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-463478 describe po kubernetes-dashboard-855c9754f9-z8j9m -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-z8j9m
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-463478/192.168.103.2
Start Time:       Mon, 29 Sep 2025 09:36:53 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6kvvg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-6kvvg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m32s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z8j9m to embed-certs-463478
Warning  Failed     7m1s                    kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    4m13s (x5 over 9m33s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m42s (x4 over 8m59s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m42s (x5 over 8m59s)   kubelet            Error: ErrImagePull
Warning  Failed     2m35s (x16 over 8m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    97s (x21 over 8m59s)    kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-463478 logs kubernetes-dashboard-855c9754f9-z8j9m -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-463478 logs kubernetes-dashboard-855c9754f9-z8j9m -n kubernetes-dashboard: exit status 1 (73.454123ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-z8j9m" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-463478 logs kubernetes-dashboard-855c9754f9-z8j9m -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-463478
helpers_test.go:243: (dbg) docker inspect embed-certs-463478:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472",
	        "Created": "2025-09-29T09:35:29.199260963Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 730547,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T09:36:37.630678348Z",
	            "FinishedAt": "2025-09-29T09:36:36.782265682Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472/hostname",
	        "HostsPath": "/var/lib/docker/containers/a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472/hosts",
	        "LogPath": "/var/lib/docker/containers/a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472/a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472-json.log",
	        "Name": "/embed-certs-463478",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-463478:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-463478",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472",
	                "LowerDir": "/var/lib/docker/overlay2/5cd03ea948d4d6c43733b56a25a2c568eb64d5074800fd1f7cd5ee8e84b38b58-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5cd03ea948d4d6c43733b56a25a2c568eb64d5074800fd1f7cd5ee8e84b38b58/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5cd03ea948d4d6c43733b56a25a2c568eb64d5074800fd1f7cd5ee8e84b38b58/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5cd03ea948d4d6c43733b56a25a2c568eb64d5074800fd1f7cd5ee8e84b38b58/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-463478",
	                "Source": "/var/lib/docker/volumes/embed-certs-463478/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-463478",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-463478",
	                "name.minikube.sigs.k8s.io": "embed-certs-463478",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f0889acdf98616ba6207e0be28c4a7879350d4fb5132f8ec3f71dee8d95efea",
	            "SandboxKey": "/var/run/docker/netns/7f0889acdf98",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-463478": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9d:20:51:16:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f47a716d00fa0ed0049865c866d39f32a13880b9157b9b533e4e9df61933299f",
	                    "EndpointID": "cbc0261eb6c9b48debb130f9cd55dc628df60a048726e90d4d7d19671a59d5fa",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-463478",
	                        "a44e8abd6d36"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-463478 -n embed-certs-463478
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-463478 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-463478 logs -n 25: (1.239839838s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-646399 sudo crio config                                                                                                                                                                                                             │ bridge-646399                │ jenkins │ v1.37.0 │ 29 Sep 25 09:35 UTC │ 29 Sep 25 09:35 UTC │
	│ delete  │ -p bridge-646399                                                                                                                                                                                                                              │ bridge-646399                │ jenkins │ v1.37.0 │ 29 Sep 25 09:35 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-463478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p embed-certs-463478 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-879079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p newest-cni-879079 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-879079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-463478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ image   │ newest-cni-879079 image list --format=json                                                                                                                                                                                                    │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ pause   │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ unpause │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-730717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p no-preload-730717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable dashboard -p no-preload-730717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ start   │ -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-547715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p default-k8s-diff-port-547715 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-547715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 09:38:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 09:38:02.602451  744475 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:38:02.604572  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604588  744475 out.go:374] Setting ErrFile to fd 2...
	I0929 09:38:02.604596  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604882  744475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:38:02.605487  744475 out.go:368] Setting JSON to false
	I0929 09:38:02.606828  744475 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12032,"bootTime":1759126651,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 09:38:02.606958  744475 start.go:140] virtualization: kvm guest
	I0929 09:38:02.608781  744475 out.go:179] * [default-k8s-diff-port-547715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 09:38:02.610638  744475 notify.go:220] Checking for updates...
	I0929 09:38:02.610689  744475 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 09:38:02.611947  744475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 09:38:02.613292  744475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:02.614515  744475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 09:38:02.615846  744475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 09:38:02.617298  744475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 09:38:02.619049  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:02.619871  744475 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 09:38:02.651910  744475 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 09:38:02.652021  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.724566  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.711673677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.724736  744475 docker.go:318] overlay module found
	I0929 09:38:02.726847  744475 out.go:179] * Using the docker driver based on existing profile
	I0929 09:38:02.727965  744475 start.go:304] selected driver: docker
	I0929 09:38:02.727982  744475 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.728131  744475 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 09:38:02.728938  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.798201  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.786507737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.798574  744475 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:02.798625  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:02.798695  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:02.798744  744475 start.go:348] cluster config:
	{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.803960  744475 out.go:179] * Starting "default-k8s-diff-port-547715" primary control-plane node in "default-k8s-diff-port-547715" cluster
	I0929 09:38:02.805367  744475 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 09:38:02.806633  744475 out.go:179] * Pulling base image v0.0.48 ...
	I0929 09:38:02.807764  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:02.807815  744475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 09:38:02.807849  744475 cache.go:58] Caching tarball of preloaded images
	I0929 09:38:02.807847  744475 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 09:38:02.807982  744475 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 09:38:02.808000  744475 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 09:38:02.808163  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:02.832169  744475 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 09:38:02.832193  744475 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 09:38:02.832223  744475 cache.go:232] Successfully downloaded all kic artifacts
	I0929 09:38:02.832255  744475 start.go:360] acquireMachinesLock for default-k8s-diff-port-547715: {Name:mkef8140f377b4de895c8571ff44e24be4754e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 09:38:02.832319  744475 start.go:364] duration metric: took 42.901µs to acquireMachinesLock for "default-k8s-diff-port-547715"
	I0929 09:38:02.832343  744475 start.go:96] Skipping create...Using existing machine configuration
	I0929 09:38:02.832351  744475 fix.go:54] fixHost starting: 
	I0929 09:38:02.832639  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:02.854072  744475 fix.go:112] recreateIfNeeded on default-k8s-diff-port-547715: state=Stopped err=<nil>
	W0929 09:38:02.854102  744475 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 09:38:02.225099  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:04.724187  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:06.724381  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:02.857616  744475 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-547715" ...
	I0929 09:38:02.857727  744475 cli_runner.go:164] Run: docker start default-k8s-diff-port-547715
	I0929 09:38:03.156711  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:03.180888  744475 kic.go:430] container "default-k8s-diff-port-547715" state is running.
	I0929 09:38:03.181888  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:03.203574  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:03.203810  744475 machine.go:93] provisionDockerMachine start ...
	I0929 09:38:03.203918  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:03.225450  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:03.225788  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:03.225809  744475 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 09:38:03.226519  744475 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33470->127.0.0.1:33506: read: connection reset by peer
	I0929 09:38:06.363220  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.363248  744475 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-547715"
	I0929 09:38:06.363324  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.381317  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.381536  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.381550  744475 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-547715 && echo "default-k8s-diff-port-547715" | sudo tee /etc/hostname
	I0929 09:38:06.531735  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.531842  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.549948  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.550236  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.550256  744475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-547715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-547715/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-547715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 09:38:06.685613  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
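The script above is the provisioning step that pins the node's hostname to 127.0.1.1 inside the container. As a quick sketch (container name taken from this run; the expected line is an assumption about the usual outcome, not output captured in this log), the result can be checked from the host with:

	# print the /etc/hosts entry the script above adds or rewrites
	docker exec default-k8s-diff-port-547715 grep default-k8s-diff-port-547715 /etc/hosts
	# typically: 127.0.1.1 default-k8s-diff-port-547715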
	I0929 09:38:06.685649  744475 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 09:38:06.685684  744475 ubuntu.go:190] setting up certificates
	I0929 09:38:06.685695  744475 provision.go:84] configureAuth start
	I0929 09:38:06.685750  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:06.704839  744475 provision.go:143] copyHostCerts
	I0929 09:38:06.704915  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem, removing ...
	I0929 09:38:06.704934  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem
	I0929 09:38:06.705006  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 09:38:06.705139  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem, removing ...
	I0929 09:38:06.705152  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem
	I0929 09:38:06.705182  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 09:38:06.705261  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem, removing ...
	I0929 09:38:06.705269  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem
	I0929 09:38:06.705295  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 09:38:06.705471  744475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-547715 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-547715 localhost minikube]
	I0929 09:38:06.863319  744475 provision.go:177] copyRemoteCerts
	I0929 09:38:06.863393  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 09:38:06.863443  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.882627  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:06.979437  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 09:38:07.004710  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 09:38:07.029798  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 09:38:07.054802  744475 provision.go:87] duration metric: took 369.089658ms to configureAuth
	I0929 09:38:07.054846  744475 ubuntu.go:206] setting minikube options for container-runtime
	I0929 09:38:07.055025  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:07.055152  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.073937  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:07.074181  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:07.074200  744475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 09:38:07.357669  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 09:38:07.357696  744475 machine.go:96] duration metric: took 4.15386954s to provisionDockerMachine
	I0929 09:38:07.357709  744475 start.go:293] postStartSetup for "default-k8s-diff-port-547715" (driver="docker")
	I0929 09:38:07.357723  744475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 09:38:07.357795  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 09:38:07.357864  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.376587  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.473948  744475 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 09:38:07.477599  744475 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 09:38:07.477638  744475 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 09:38:07.477651  744475 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 09:38:07.477659  744475 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 09:38:07.477675  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 09:38:07.477729  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 09:38:07.477798  744475 filesync.go:149] local asset: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem -> 3862252.pem in /etc/ssl/certs
	I0929 09:38:07.477941  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 09:38:07.487030  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:07.511935  744475 start.go:296] duration metric: took 154.207911ms for postStartSetup
	I0929 09:38:07.512029  744475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:38:07.512065  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.530146  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.622415  744475 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 09:38:07.627142  744475 fix.go:56] duration metric: took 4.794784277s for fixHost
	I0929 09:38:07.627172  744475 start.go:83] releasing machines lock for "default-k8s-diff-port-547715", held for 4.794838826s
	I0929 09:38:07.627231  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:07.645874  744475 ssh_runner.go:195] Run: cat /version.json
	I0929 09:38:07.645918  744475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 09:38:07.645945  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.645972  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.664991  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.665181  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.828453  744475 ssh_runner.go:195] Run: systemctl --version
	I0929 09:38:07.833549  744475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 09:38:07.976610  744475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 09:38:07.981640  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:07.991646  744475 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 09:38:07.991738  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:08.001522  744475 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
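The two find/mv runs above sideline loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix (here only the loopback config existed, so the bridge pass found nothing to disable). A sketch for listing what was moved aside, assuming access to the node container from the host:

	# files ending in .mk_disabled are the configs renamed by the step above
	docker exec default-k8s-diff-port-547715 ls -la /etc/cni/net.d/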
	I0929 09:38:08.001550  744475 start.go:495] detecting cgroup driver to use...
	I0929 09:38:08.001586  744475 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 09:38:08.001645  744475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 09:38:08.014507  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 09:38:08.026523  744475 docker.go:218] disabling cri-docker service (if available) ...
	I0929 09:38:08.026594  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 09:38:08.040674  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 09:38:08.052914  744475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 09:38:08.121663  744475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 09:38:08.190873  744475 docker.go:234] disabling docker service ...
	I0929 09:38:08.190996  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 09:38:08.203929  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 09:38:08.215853  744475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 09:38:08.282230  744475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 09:38:08.347410  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 09:38:08.359320  744475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 09:38:08.376309  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:08.524854  744475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 09:38:08.524933  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.536486  744475 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 09:38:08.536545  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.547317  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.557769  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.568183  744475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 09:38:08.578182  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.588665  744475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.598857  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.609520  744475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 09:38:08.618464  744475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 09:38:08.627869  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:08.694951  744475 ssh_runner.go:195] Run: sudo systemctl restart crio
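Taken together, the sed edits above set the pause image, switch CRI-O to the systemd cgroup manager with a pod-scoped conmon cgroup, and add the unprivileged-port sysctl before crio is restarted. A sketch for confirming the resulting drop-in on the node (the grep pattern is ad hoc; the key names and values come from the sed commands shown in this log):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected values after the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",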
	I0929 09:38:08.976752  744475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 09:38:08.976819  744475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 09:38:08.980869  744475 start.go:563] Will wait 60s for crictl version
	I0929 09:38:08.980932  744475 ssh_runner.go:195] Run: which crictl
	I0929 09:38:08.984701  744475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 09:38:09.019500  744475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 09:38:09.019620  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.055087  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.091964  744475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	W0929 09:38:08.724626  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:09.223924  739826 pod_ready.go:94] pod "coredns-66bc5c9577-ncwp4" is "Ready"
	I0929 09:38:09.224002  739826 pod_ready.go:86] duration metric: took 41.005435401s for pod "coredns-66bc5c9577-ncwp4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.226573  739826 pod_ready.go:83] waiting for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.230177  739826 pod_ready.go:94] pod "etcd-no-preload-730717" is "Ready"
	I0929 09:38:09.230196  739826 pod_ready.go:86] duration metric: took 3.600648ms for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.232019  739826 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.235556  739826 pod_ready.go:94] pod "kube-apiserver-no-preload-730717" is "Ready"
	I0929 09:38:09.235574  739826 pod_ready.go:86] duration metric: took 3.535675ms for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.237200  739826 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.422451  739826 pod_ready.go:94] pod "kube-controller-manager-no-preload-730717" is "Ready"
	I0929 09:38:09.422486  739826 pod_ready.go:86] duration metric: took 185.263743ms for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.623052  739826 pod_ready.go:83] waiting for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.022664  739826 pod_ready.go:94] pod "kube-proxy-4bmgw" is "Ready"
	I0929 09:38:10.022689  739826 pod_ready.go:86] duration metric: took 399.612543ms for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.224443  739826 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622809  739826 pod_ready.go:94] pod "kube-scheduler-no-preload-730717" is "Ready"
	I0929 09:38:10.622852  739826 pod_ready.go:86] duration metric: took 398.374387ms for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622869  739826 pod_ready.go:40] duration metric: took 42.407933129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:10.670550  739826 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:10.673808  739826 out.go:179] * Done! kubectl is now configured to use "no-preload-730717" cluster and "default" namespace by default
	I0929 09:38:09.093120  744475 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-547715 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 09:38:09.111264  744475 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 09:38:09.115466  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:09.127999  744475 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 09:38:09.128194  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.274999  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.416048  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.554074  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:09.554387  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.693270  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.833942  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.976460  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.021351  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.021374  744475 crio.go:433] Images already preloaded, skipping extraction
	I0929 09:38:10.021423  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.057863  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.057891  744475 cache_images.go:85] Images are preloaded, skipping loading
	I0929 09:38:10.057901  744475 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I0929 09:38:10.058037  744475 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-547715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
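The [Unit]/[Service] fragment above is the kubelet drop-in that minikube generates; the scp a few lines below places it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To view the effective unit with all drop-ins merged, a sketch run against the node container from the host:

	docker exec default-k8s-diff-port-547715 systemctl cat kubelet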
	I0929 09:38:10.058111  744475 ssh_runner.go:195] Run: crio config
	I0929 09:38:10.102165  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:10.102193  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:10.102207  744475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 09:38:10.102236  744475 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-547715 NodeName:default-k8s-diff-port-547715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 09:38:10.102404  744475 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-547715"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 09:38:10.102481  744475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 09:38:10.112188  744475 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 09:38:10.112255  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 09:38:10.121661  744475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 09:38:10.140487  744475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 09:38:10.160494  744475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
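With the generated manifest now copied to /var/tmp/minikube/kubeadm.yaml.new, it could also be sanity-checked by hand; recent kubeadm releases ship a config validator (a sketch only, not something this run performs, and the binary path is assumed to match the kubelet path used above):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new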
	I0929 09:38:10.179722  744475 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 09:38:10.183977  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:10.196126  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.262691  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.292254  744475 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715 for IP: 192.168.85.2
	I0929 09:38:10.292283  744475 certs.go:194] generating shared ca certs ...
	I0929 09:38:10.292301  744475 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.292443  744475 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 09:38:10.292483  744475 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 09:38:10.292493  744475 certs.go:256] generating profile certs ...
	I0929 09:38:10.292592  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/client.key
	I0929 09:38:10.292649  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key.78d67a41
	I0929 09:38:10.292690  744475 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key
	I0929 09:38:10.292789  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem (1338 bytes)
	W0929 09:38:10.292816  744475 certs.go:480] ignoring /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225_empty.pem, impossibly tiny 0 bytes
	I0929 09:38:10.292825  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 09:38:10.292877  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 09:38:10.292902  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 09:38:10.292924  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 09:38:10.292963  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:10.293652  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 09:38:10.320976  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 09:38:10.349012  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 09:38:10.381487  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 09:38:10.406553  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 09:38:10.432469  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 09:38:10.458734  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 09:38:10.483339  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 09:38:10.508019  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /usr/share/ca-certificates/3862252.pem (1708 bytes)
	I0929 09:38:10.533382  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 09:38:10.558362  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem --> /usr/share/ca-certificates/386225.pem (1338 bytes)
	I0929 09:38:10.583377  744475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 09:38:10.602070  744475 ssh_runner.go:195] Run: openssl version
	I0929 09:38:10.607660  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3862252.pem && ln -fs /usr/share/ca-certificates/3862252.pem /etc/ssl/certs/3862252.pem"
	I0929 09:38:10.617911  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622307  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 08:48 /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622354  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.629918  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3862252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 09:38:10.640804  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 09:38:10.651151  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655258  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655316  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.662603  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 09:38:10.672822  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386225.pem && ln -fs /usr/share/ca-certificates/386225.pem /etc/ssl/certs/386225.pem"
	I0929 09:38:10.683319  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687277  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 08:48 /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687348  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.696079  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386225.pem /etc/ssl/certs/51391683.0"
	I0929 09:38:10.707660  744475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 09:38:10.711977  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 09:38:10.719705  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 09:38:10.727227  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 09:38:10.734938  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 09:38:10.742331  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 09:38:10.750000  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
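The openssl runs above do two different checks: -hash -noout prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks created just before (b5213941.0 for minikubeCA), and -checkend 86400 exits non-zero if the certificate expires within the next 24 hours. A minimal sketch combining both:

	# subject hash backing the /etc/ssl/certs/b5213941.0 symlink seen above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# exits 0 only while the cert remains valid for at least another 86400s (24h)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "still valid for 24h"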
	I0929 09:38:10.758994  744475 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docke
r MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:10.759111  744475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 09:38:10.759156  744475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 09:38:10.801701  744475 cri.go:89] found id: ""
	I0929 09:38:10.801777  744475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 09:38:10.814003  744475 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 09:38:10.814030  744475 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 09:38:10.814082  744475 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 09:38:10.825280  744475 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:38:10.826421  744475 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-547715" does not appear in /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.827379  744475 kubeconfig.go:62] /home/jenkins/minikube-integration/21650-382648/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-547715" cluster setting kubeconfig missing "default-k8s-diff-port-547715" context setting]
	I0929 09:38:10.828702  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.830983  744475 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 09:38:10.843171  744475 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 09:38:10.843214  744475 kubeadm.go:593] duration metric: took 29.177344ms to restartPrimaryControlPlane
	I0929 09:38:10.843227  744475 kubeadm.go:394] duration metric: took 84.244515ms to StartCluster
	I0929 09:38:10.843248  744475 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.843363  744475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.845603  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.846384  744475 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 09:38:10.846454  744475 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 09:38:10.846542  744475 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846565  744475 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846574  744475 addons.go:247] addon storage-provisioner should already be in state true
	I0929 09:38:10.846575  744475 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846596  744475 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846614  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846620  744475 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:10.846621  744475 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-547715"
	I0929 09:38:10.846618  744475 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846630  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:10.846642  744475 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846656  744475 addons.go:247] addon dashboard should already be in state true
	W0929 09:38:10.846631  744475 addons.go:247] addon metrics-server should already be in state true
	I0929 09:38:10.846681  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846697  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846974  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847135  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847150  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847155  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.848072  744475 out.go:179] * Verifying Kubernetes components...
	I0929 09:38:10.849415  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.877953  744475 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 09:38:10.877980  744475 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 09:38:10.878525  744475 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.878545  744475 addons.go:247] addon default-storageclass should already be in state true
	I0929 09:38:10.878575  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.879047  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.879403  744475 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 09:38:10.879439  744475 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:10.879448  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 09:38:10.879475  744475 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 09:38:10.879548  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.879454  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 09:38:10.879612  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.883150  744475 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 09:38:10.884341  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 09:38:10.884361  744475 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 09:38:10.884428  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.910318  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.910796  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.911948  744475 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:10.911964  744475 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 09:38:10.912016  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.914592  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.935385  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.956363  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.989150  744475 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:11.038321  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:11.042162  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 09:38:11.042187  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 09:38:11.047218  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 09:38:11.047242  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 09:38:11.070239  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.072804  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 09:38:11.072828  744475 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 09:38:11.078863  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 09:38:11.078893  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 09:38:11.104886  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.104914  744475 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 09:38:11.110131  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 09:38:11.110158  744475 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 09:38:11.142191  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 09:38:11.142219  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 09:38:11.148094  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.148238  744475 retry.go:31] will retry after 359.205678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.151384  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.179885  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 09:38:11.179923  744475 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 09:38:11.182481  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.182514  744475 retry.go:31] will retry after 316.417959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.208649  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 09:38:11.208682  744475 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 09:38:11.232655  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 09:38:11.232724  744475 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 09:38:11.252807  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 09:38:11.252860  744475 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 09:38:11.272945  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.272972  744475 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 09:38:11.292603  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.499678  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.508207  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:12.841081  744475 node_ready.go:49] node "default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:12.841123  744475 node_ready.go:38] duration metric: took 1.85187108s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:12.841142  744475 api_server.go:52] waiting for apiserver process to appear ...
	I0929 09:38:12.841200  744475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:38:13.424995  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.273447364s)
	I0929 09:38:13.425060  744475 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:13.425163  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.132513063s)
	I0929 09:38:13.425661  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.925949942s)
	I0929 09:38:13.425900  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.917662767s)
	I0929 09:38:13.426006  744475 api_server.go:72] duration metric: took 2.57958819s to wait for apiserver process to appear ...
	I0929 09:38:13.426024  744475 api_server.go:88] waiting for apiserver healthz status ...
	I0929 09:38:13.426045  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.427072  744475 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-547715 addons enable metrics-server
	
	I0929 09:38:13.431499  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.431522  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:13.435572  744475 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0929 09:38:13.436883  744475 addons.go:514] duration metric: took 2.590443822s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0929 09:38:13.926913  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.932318  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.932348  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:14.426994  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:14.431739  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0929 09:38:14.432753  744475 api_server.go:141] control plane version: v1.34.1
	I0929 09:38:14.432785  744475 api_server.go:131] duration metric: took 1.006754243s to wait for apiserver health ...
	I0929 09:38:14.432798  744475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 09:38:14.435903  744475 system_pods.go:59] 9 kube-system pods found
	I0929 09:38:14.435952  744475 system_pods.go:61] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.435967  744475 system_pods.go:61] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.435982  744475 system_pods.go:61] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.435998  744475 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.436014  744475 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.436023  744475 system_pods.go:61] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.436033  744475 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.436045  744475 system_pods.go:61] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.436053  744475 system_pods.go:61] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.436063  744475 system_pods.go:74] duration metric: took 3.257318ms to wait for pod list to return data ...
	I0929 09:38:14.436077  744475 default_sa.go:34] waiting for default service account to be created ...
	I0929 09:38:14.438271  744475 default_sa.go:45] found service account: "default"
	I0929 09:38:14.438293  744475 default_sa.go:55] duration metric: took 2.206178ms for default service account to be created ...
	I0929 09:38:14.438304  744475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 09:38:14.441520  744475 system_pods.go:86] 9 kube-system pods found
	I0929 09:38:14.441555  744475 system_pods.go:89] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.441569  744475 system_pods.go:89] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.441583  744475 system_pods.go:89] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.441591  744475 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.441606  744475 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.441613  744475 system_pods.go:89] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.441622  744475 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.441633  744475 system_pods.go:89] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.441641  744475 system_pods.go:89] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.441654  744475 system_pods.go:126] duration metric: took 3.342797ms to wait for k8s-apps to be running ...
	I0929 09:38:14.441667  744475 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 09:38:14.441718  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:38:14.457198  744475 system_svc.go:56] duration metric: took 15.510885ms WaitForService to wait for kubelet
	I0929 09:38:14.457234  744475 kubeadm.go:578] duration metric: took 3.610818298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:14.457257  744475 node_conditions.go:102] verifying NodePressure condition ...
	I0929 09:38:14.460508  744475 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 09:38:14.460534  744475 node_conditions.go:123] node cpu capacity is 8
	I0929 09:38:14.460550  744475 node_conditions.go:105] duration metric: took 3.287088ms to run NodePressure ...
	I0929 09:38:14.460566  744475 start.go:241] waiting for startup goroutines ...
	I0929 09:38:14.460575  744475 start.go:246] waiting for cluster config update ...
	I0929 09:38:14.460591  744475 start.go:255] writing updated cluster config ...
	I0929 09:38:14.461011  744475 ssh_runner.go:195] Run: rm -f paused
	I0929 09:38:14.465262  744475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:14.469249  744475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 09:38:16.474616  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:18.974817  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:21.474679  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:23.974653  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:25.974904  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:27.975234  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:30.474414  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:32.475244  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:34.975746  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:37.474689  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:39.974324  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:42.474794  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:44.476364  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:46.974499  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:49.474657  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:51.474940  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	I0929 09:38:52.974403  744475 pod_ready.go:94] pod "coredns-66bc5c9577-szmnf" is "Ready"
	I0929 09:38:52.974429  744475 pod_ready.go:86] duration metric: took 38.50515659s for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.977032  744475 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.980878  744475 pod_ready.go:94] pod "etcd-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.980904  744475 pod_ready.go:86] duration metric: took 3.847603ms for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.982681  744475 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.986175  744475 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.986196  744475 pod_ready.go:86] duration metric: took 3.493752ms for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.988006  744475 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.172805  744475 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:53.172860  744475 pod_ready.go:86] duration metric: took 184.829323ms for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.372987  744475 pod_ready.go:83] waiting for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.772398  744475 pod_ready.go:94] pod "kube-proxy-tklgn" is "Ready"
	I0929 09:38:53.772428  744475 pod_ready.go:86] duration metric: took 399.413461ms for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.972993  744475 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373344  744475 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:54.373370  744475 pod_ready.go:86] duration metric: took 400.353446ms for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373382  744475 pod_ready.go:40] duration metric: took 39.908092821s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:54.420218  744475 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:54.422092  744475 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-547715" cluster and "default" namespace by default
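
The restart log above ends with minikube polling https://192.168.85.2:8444/healthz, treating the 500 responses (poststarthooks still running) as "not ready" and stopping once a 200 arrives. The Go sketch below shows that polling pattern in isolation; the endpoint is copied from the log, while the interval, overall timeout, and the relaxed TLS handling are illustrative assumptions rather than minikube's own settings.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver healthz URL until it returns 200 or the
// deadline passes. TLS verification is skipped because the probe hits the
// node IP directly, as the log above does.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // healthz returned 200: control plane is reachable
			}
			fmt.Printf("healthz returned %d, retrying\n", code) // e.g. 500 while poststarthooks finish
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// Endpoint taken from the log; interval and timeout are illustrative.
	if err := waitForHealthz("https://192.168.85.2:8444/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}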
	
	
	==> CRI-O <==
	Sep 29 09:44:58 embed-certs-463478 crio[560]: time="2025-09-29 09:44:58.405732730Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=fb315717-0e9c-4abd-99ed-9dfab353102b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:00 embed-certs-463478 crio[560]: time="2025-09-29 09:45:00.406124971Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=586025d1-eec4-4dbc-8c9a-3635c10f9668 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:00 embed-certs-463478 crio[560]: time="2025-09-29 09:45:00.406473792Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=586025d1-eec4-4dbc-8c9a-3635c10f9668 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:10 embed-certs-463478 crio[560]: time="2025-09-29 09:45:10.406293889Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8f253c02-da34-4fbe-bd24-d7767e37a745 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:10 embed-certs-463478 crio[560]: time="2025-09-29 09:45:10.406526428Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8f253c02-da34-4fbe-bd24-d7767e37a745 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:12 embed-certs-463478 crio[560]: time="2025-09-29 09:45:12.406376925Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=995b9661-b158-4833-99b0-99b047732386 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:12 embed-certs-463478 crio[560]: time="2025-09-29 09:45:12.406642203Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=995b9661-b158-4833-99b0-99b047732386 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:23 embed-certs-463478 crio[560]: time="2025-09-29 09:45:23.406237530Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b0ac4998-0770-41f0-b754-77961fbc04ed name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:23 embed-certs-463478 crio[560]: time="2025-09-29 09:45:23.406513962Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b0ac4998-0770-41f0-b754-77961fbc04ed name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:24 embed-certs-463478 crio[560]: time="2025-09-29 09:45:24.405860394Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=cd6a680b-7240-4e01-9875-0c445401ecf7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:24 embed-certs-463478 crio[560]: time="2025-09-29 09:45:24.406159122Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=cd6a680b-7240-4e01-9875-0c445401ecf7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:24 embed-certs-463478 crio[560]: time="2025-09-29 09:45:24.406775975Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b7a40722-5abb-40b7-9fde-a2f1a18522f6 name=/runtime.v1.ImageService/PullImage
	Sep 29 09:45:24 embed-certs-463478 crio[560]: time="2025-09-29 09:45:24.408383582Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 09:45:35 embed-certs-463478 crio[560]: time="2025-09-29 09:45:35.406240534Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=06a7f4ce-4473-4392-a6be-646edb98ed27 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:35 embed-certs-463478 crio[560]: time="2025-09-29 09:45:35.406513971Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=06a7f4ce-4473-4392-a6be-646edb98ed27 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:50 embed-certs-463478 crio[560]: time="2025-09-29 09:45:50.406087974Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=123fafa2-ab21-4651-9d01-ae4e39aabf56 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:50 embed-certs-463478 crio[560]: time="2025-09-29 09:45:50.406343841Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=123fafa2-ab21-4651-9d01-ae4e39aabf56 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:05 embed-certs-463478 crio[560]: time="2025-09-29 09:46:05.405843588Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ea94ff9a-c092-474d-b28c-f74b16e69ca9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:05 embed-certs-463478 crio[560]: time="2025-09-29 09:46:05.406113783Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ea94ff9a-c092-474d-b28c-f74b16e69ca9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:09 embed-certs-463478 crio[560]: time="2025-09-29 09:46:09.406029643Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9de6539f-389e-4d41-b9c5-6467f0657bf6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:09 embed-certs-463478 crio[560]: time="2025-09-29 09:46:09.406356169Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9de6539f-389e-4d41-b9c5-6467f0657bf6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:16 embed-certs-463478 crio[560]: time="2025-09-29 09:46:16.405773456Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6abc12a4-b6c9-4027-8ee1-ce7cec31b8b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:16 embed-certs-463478 crio[560]: time="2025-09-29 09:46:16.406019145Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6abc12a4-b6c9-4027-8ee1-ce7cec31b8b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:23 embed-certs-463478 crio[560]: time="2025-09-29 09:46:23.405599287Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=bc3712e5-1e98-432b-92b0-7aaf7da3148d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:23 embed-certs-463478 crio[560]: time="2025-09-29 09:46:23.405935642Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=bc3712e5-1e98-432b-92b0-7aaf7da3148d name=/runtime.v1.ImageService/ImageStatus
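
The CRI-O entries above are expected noise for this profile: the kubelet keeps asking for fake.domain/registry.k8s.io/echoserver:1.4 (the intentionally unreachable registry configured through CustomAddonRegistries earlier in this log) and for the dashboard image before its pull completes, and CRI-O answers "not found" each time. As a rough way to reproduce that check from the node, the sketch below shells out to crictl inspecti and treats any error as "not present or not inspectable"; the sudo invocation and the (abbreviated) image list are assumptions for illustration, not part of the test harness.

package main

import (
	"fmt"
	"os/exec"
)

// imagePresent asks the CRI runtime about an image via `crictl inspecti`
// and reports whether the command succeeded.
func imagePresent(image string) bool {
	cmd := exec.Command("sudo", "crictl", "inspecti", image)
	return cmd.Run() == nil
}

func main() {
	for _, img := range []string{
		"fake.domain/registry.k8s.io/echoserver:1.4",
		"docker.io/kubernetesui/dashboard:v2.7.0",
	} {
		fmt.Printf("%s present: %v\n", img, imagePresent(img))
	}
}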
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	ac4071b4dae1b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   0531764e1e569       dashboard-metrics-scraper-6ffb444bf9-cwq99
	21c186f4ce38f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   c464ef39d6367       storage-provisioner
	ecc81e8f7932b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   c683e79f1a94e       coredns-66bc5c9577-ng4bv
	b690bc729bd3a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   07829e3fd5c6d       kindnet-9nmlh
	039d40493c8bf       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   412dbba55ab2e       busybox
	28dff9304995e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   9 minutes ago       Running             kube-proxy                  1                   1b6952857fdbd       kube-proxy-k4ld5
	d53267deead34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   c464ef39d6367       storage-provisioner
	d6f847bce3be4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 minutes ago       Running             kube-scheduler              1                   e9be904d3fd8a       kube-scheduler-embed-certs-463478
	3126402649a15       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   007a32fcff623       etcd-embed-certs-463478
	a69cb8e81f046       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 minutes ago       Running             kube-controller-manager     1                   9538c1f652fb8       kube-controller-manager-embed-certs-463478
	280012c3ca262       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 minutes ago       Running             kube-apiserver              1                   f8b9999252fb9       kube-apiserver-embed-certs-463478
	
	
	==> coredns [ecc81e8f7932b0e3feb6acfdd2c812326afd5178d9777c7b3d558e0d8d36a8f7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47970 - 5425 "HINFO IN 974549403489356571.6881221070274710199. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.065518313s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
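
The i/o timeouts above show coredns briefly unable to reach the kubernetes service VIP (10.96.0.1:443) right after the restart, most likely before service routing on the node is back in place; the plugin recovers once the list/watch calls start succeeding. A minimal probe of that VIP, with the address taken from the log and the timeout chosen arbitrarily, could look like the sketch below; it assumes it runs somewhere the cluster service network is routable.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeService attempts a TCP connection to the kubernetes service VIP,
// the same address coredns was timing out against in the log above.
func probeService(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "i/o timeout" while service routing is absent
	}
	return conn.Close()
}

func main() {
	if err := probeService("10.96.0.1:443", 3*time.Second); err != nil {
		fmt.Println("service VIP unreachable:", err)
		return
	}
	fmt.Println("service VIP reachable")
}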
	
	
	==> describe nodes <==
	Name:               embed-certs-463478
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-463478
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=embed-certs-463478
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T09_35_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 09:35:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-463478
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:46:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 09:44:36 +0000   Mon, 29 Sep 2025 09:35:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 09:44:36 +0000   Mon, 29 Sep 2025 09:35:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 09:44:36 +0000   Mon, 29 Sep 2025 09:35:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 09:44:36 +0000   Mon, 29 Sep 2025 09:36:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-463478
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 ffa6578ce1684e6984ae5415f52f912e
	  System UUID:                72ae182d-f4a0-49ca-b485-ab8af5072e06
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-ng4bv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-embed-certs-463478                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-9nmlh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-embed-certs-463478             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-embed-certs-463478    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-k4ld5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-embed-certs-463478             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-skth6               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cwq99    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z8j9m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node embed-certs-463478 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node embed-certs-463478 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node embed-certs-463478 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node embed-certs-463478 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node embed-certs-463478 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node embed-certs-463478 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node embed-certs-463478 event: Registered Node embed-certs-463478 in Controller
	  Normal  NodeReady                10m                    kubelet          Node embed-certs-463478 status is now: NodeReady
	  Normal  Starting                 9m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m42s (x8 over 9m42s)  kubelet          Node embed-certs-463478 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m42s (x8 over 9m42s)  kubelet          Node embed-certs-463478 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m42s (x8 over 9m42s)  kubelet          Node embed-certs-463478 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m35s                  node-controller  Node embed-certs-463478 event: Registered Node embed-certs-463478 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +24.116183] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[ +13.219794] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 36 70 5c 70 56 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[Sep29 09:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 d0 49 6d e5 00 08 06
	[  +0.000572] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +31.077955] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[  +7.090917] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ee a6 ac d9 7a 08 06
	[  +0.048507] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	[Sep29 09:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 9c 10 70 fc bc 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[ +35.403219] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 f0 eb 9a e4 7a 08 06
	[  +0.000378] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	
	
	==> etcd [3126402649a15b187acca268b80334b4a484cf819155f77c981e6e7b90d59267] <==
	{"level":"warn","ts":"2025-09-29T09:36:47.394035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.402245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.410422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.418736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.438153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.453938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.462872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.471366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.485974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.502345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.511972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.519167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.532229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.539738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.547010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.612228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35010","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T09:36:58.486179Z","caller":"traceutil/trace.go:172","msg":"trace[1830524343] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"127.978568ms","start":"2025-09-29T09:36:58.358182Z","end":"2025-09-29T09:36:58.486160Z","steps":["trace[1830524343] 'process raft request'  (duration: 127.851067ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T09:36:58.684968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.513407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-ng4bv\" limit:1 ","response":"range_response_count:1 size:5920"}
	{"level":"info","ts":"2025-09-29T09:36:58.685063Z","caller":"traceutil/trace.go:172","msg":"trace[1656099704] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-ng4bv; range_end:; response_count:1; response_revision:653; }","duration":"121.635853ms","start":"2025-09-29T09:36:58.563409Z","end":"2025-09-29T09:36:58.685045Z","steps":["trace[1656099704] 'range keys from in-memory index tree'  (duration: 121.293943ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T09:36:59.896783Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.046829ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788991598031837 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-463478\" mod_revision:654 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-463478\" value_size:7926 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-463478\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-29T09:36:59.896922Z","caller":"traceutil/trace.go:172","msg":"trace[1390110800] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"290.039816ms","start":"2025-09-29T09:36:59.606867Z","end":"2025-09-29T09:36:59.896906Z","steps":["trace[1390110800] 'process raft request'  (duration: 155.273801ms)","trace[1390110800] 'compare'  (duration: 133.939622ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T09:37:00.186419Z","caller":"traceutil/trace.go:172","msg":"trace[507678109] linearizableReadLoop","detail":"{readStateIndex:686; appliedIndex:686; }","duration":"122.947367ms","start":"2025-09-29T09:37:00.063450Z","end":"2025-09-29T09:37:00.186397Z","steps":["trace[507678109] 'read index received'  (duration: 122.938064ms)","trace[507678109] 'applied index is now lower than readState.Index'  (duration: 7.983µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T09:37:00.187099Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.614697ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-ng4bv\" limit:1 ","response":"range_response_count:1 size:5920"}
	{"level":"info","ts":"2025-09-29T09:37:00.187184Z","caller":"traceutil/trace.go:172","msg":"trace[1405870858] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-ng4bv; range_end:; response_count:1; response_revision:658; }","duration":"123.729492ms","start":"2025-09-29T09:37:00.063438Z","end":"2025-09-29T09:37:00.187167Z","steps":["trace[1405870858] 'agreement among raft nodes before linearized reading'  (duration: 123.041051ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T09:37:00.187406Z","caller":"traceutil/trace.go:172","msg":"trace[1500197932] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"191.840705ms","start":"2025-09-29T09:36:59.995550Z","end":"2025-09-29T09:37:00.187390Z","steps":["trace[1500197932] 'process raft request'  (duration: 190.89814ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:46:27 up  3:28,  0 users,  load average: 0.38, 0.93, 1.53
	Linux embed-certs-463478 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b690bc729bd3adf650e3273ee975c81808254796a7fd4cb6050342571627d047] <==
	I0929 09:44:19.379278       1 main.go:301] handling current node
	I0929 09:44:29.378898       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:44:29.378930       1 main.go:301] handling current node
	I0929 09:44:39.380185       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:44:39.380224       1 main.go:301] handling current node
	I0929 09:44:49.387954       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:44:49.387994       1 main.go:301] handling current node
	I0929 09:44:59.385933       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:44:59.385987       1 main.go:301] handling current node
	I0929 09:45:09.379616       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:45:09.379646       1 main.go:301] handling current node
	I0929 09:45:19.379207       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:45:19.379260       1 main.go:301] handling current node
	I0929 09:45:29.383495       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:45:29.383548       1 main.go:301] handling current node
	I0929 09:45:39.379912       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:45:39.379942       1 main.go:301] handling current node
	I0929 09:45:49.387955       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:45:49.387993       1 main.go:301] handling current node
	I0929 09:45:59.382945       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:45:59.382976       1 main.go:301] handling current node
	I0929 09:46:09.379072       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:46:09.379127       1 main.go:301] handling current node
	I0929 09:46:19.379048       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:46:19.379084       1 main.go:301] handling current node
	
	
	==> kube-apiserver [280012c3ca2625ba7056bf2133ebda307036c512570fd14d7f3b31dcf4a119d2] <==
	E0929 09:41:49.181913       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0929 09:41:49.181912       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:41:49.181934       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 09:41:49.183015       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:42:49.183082       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:42:49.183135       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:42:49.183154       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:42:49.183227       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:42:49.183280       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:42:49.185273       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:44:49.183873       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:44:49.183924       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:44:49.183944       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:44:49.186074       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:44:49.186165       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:44:49.186187       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a69cb8e81f04649b253d6861374bcc65c81b43d9fa6fc6723b93faf0ed100b7c] <==
	I0929 09:40:22.764039       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:40:52.729426       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:40:52.772122       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:41:22.733396       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:41:22.779130       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:41:52.738107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:41:52.785933       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:42:22.741791       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:42:22.792781       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:42:52.746535       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:42:52.800092       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:43:22.750593       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:43:22.807049       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:43:52.755078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:43:52.814196       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:44:22.759300       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:44:22.820955       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:44:52.763953       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:44:52.827895       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:45:22.768687       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:45:22.834940       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:45:52.772706       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:45:52.842486       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:46:22.776771       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:46:22.849260       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [28dff9304995e2646cdfd13d90c8aad5e138841b37a9d4d4feb0dfac16fd990b] <==
	I0929 09:36:49.024365       1 server_linux.go:53] "Using iptables proxy"
	I0929 09:36:49.082075       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 09:36:49.183081       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 09:36:49.183184       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0929 09:36:49.183289       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 09:36:49.206465       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 09:36:49.206611       1 server_linux.go:132] "Using iptables Proxier"
	I0929 09:36:49.214130       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 09:36:49.214505       1 server.go:527] "Version info" version="v1.34.1"
	I0929 09:36:49.214543       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:36:49.216163       1 config.go:200] "Starting service config controller"
	I0929 09:36:49.216229       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 09:36:49.216260       1 config.go:309] "Starting node config controller"
	I0929 09:36:49.216307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 09:36:49.216329       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 09:36:49.216334       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 09:36:49.216344       1 config.go:106] "Starting endpoint slice config controller"
	I0929 09:36:49.216353       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 09:36:49.316413       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 09:36:49.316447       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 09:36:49.316463       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 09:36:49.316714       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d6f847bce3be4dae3617fa06d4c956cfb0338ebbf44a8256d68cb664d24d04db] <==
	I0929 09:36:47.276898       1 serving.go:386] Generated self-signed cert in-memory
	I0929 09:36:48.603316       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I0929 09:36:48.603346       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:36:48.613728       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 09:36:48.613770       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 09:36:48.613847       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:36:48.613864       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:36:48.613881       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 09:36:48.613888       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 09:36:48.614899       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 09:36:48.615042       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 09:36:48.714857       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 09:36:48.714990       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 09:36:48.715010       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 09:45:47 embed-certs-463478 kubelet[697]: I0929 09:45:47.405925     697 scope.go:117] "RemoveContainer" containerID="ac4071b4dae1bc58b2f1af96e49b8a290b0f94915f22d70536878d6e33c87ee6"
	Sep 29 09:45:47 embed-certs-463478 kubelet[697]: E0929 09:45:47.406176     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwq99_kubernetes-dashboard(b74cb377-cf5c-4636-8099-44545d0374df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwq99" podUID="b74cb377-cf5c-4636-8099-44545d0374df"
	Sep 29 09:45:50 embed-certs-463478 kubelet[697]: E0929 09:45:50.406726     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-skth6" podUID="8bc7d87b-3756-4e24-8c05-f5c637ce8065"
	Sep 29 09:45:55 embed-certs-463478 kubelet[697]: E0929 09:45:55.218511     697 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 09:45:55 embed-certs-463478 kubelet[697]: E0929 09:45:55.218579     697 kuberuntime_image.go:43] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 09:45:55 embed-certs-463478 kubelet[697]: E0929 09:45:55.218672     697 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-z8j9m_kubernetes-dashboard(ed919a2e-20ad-45ae-af2e-22135bc8c096): ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 09:45:55 embed-certs-463478 kubelet[697]: E0929 09:45:55.218714     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z8j9m" podUID="ed919a2e-20ad-45ae-af2e-22135bc8c096"
	Sep 29 09:45:55 embed-certs-463478 kubelet[697]: E0929 09:45:55.465446     697 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139155465231255  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:45:55 embed-certs-463478 kubelet[697]: E0929 09:45:55.465481     697 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139155465231255  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:45:59 embed-certs-463478 kubelet[697]: I0929 09:45:59.405950     697 scope.go:117] "RemoveContainer" containerID="ac4071b4dae1bc58b2f1af96e49b8a290b0f94915f22d70536878d6e33c87ee6"
	Sep 29 09:45:59 embed-certs-463478 kubelet[697]: E0929 09:45:59.406181     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwq99_kubernetes-dashboard(b74cb377-cf5c-4636-8099-44545d0374df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwq99" podUID="b74cb377-cf5c-4636-8099-44545d0374df"
	Sep 29 09:46:05 embed-certs-463478 kubelet[697]: E0929 09:46:05.406413     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-skth6" podUID="8bc7d87b-3756-4e24-8c05-f5c637ce8065"
	Sep 29 09:46:05 embed-certs-463478 kubelet[697]: E0929 09:46:05.466587     697 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139165466336570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:46:05 embed-certs-463478 kubelet[697]: E0929 09:46:05.466623     697 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139165466336570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:46:09 embed-certs-463478 kubelet[697]: E0929 09:46:09.406675     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z8j9m" podUID="ed919a2e-20ad-45ae-af2e-22135bc8c096"
	Sep 29 09:46:12 embed-certs-463478 kubelet[697]: I0929 09:46:12.405210     697 scope.go:117] "RemoveContainer" containerID="ac4071b4dae1bc58b2f1af96e49b8a290b0f94915f22d70536878d6e33c87ee6"
	Sep 29 09:46:12 embed-certs-463478 kubelet[697]: E0929 09:46:12.405395     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwq99_kubernetes-dashboard(b74cb377-cf5c-4636-8099-44545d0374df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwq99" podUID="b74cb377-cf5c-4636-8099-44545d0374df"
	Sep 29 09:46:15 embed-certs-463478 kubelet[697]: E0929 09:46:15.467803     697 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139175467587150  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:46:15 embed-certs-463478 kubelet[697]: E0929 09:46:15.468351     697 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139175467587150  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:46:16 embed-certs-463478 kubelet[697]: E0929 09:46:16.406352     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-skth6" podUID="8bc7d87b-3756-4e24-8c05-f5c637ce8065"
	Sep 29 09:46:23 embed-certs-463478 kubelet[697]: E0929 09:46:23.406247     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z8j9m" podUID="ed919a2e-20ad-45ae-af2e-22135bc8c096"
	Sep 29 09:46:24 embed-certs-463478 kubelet[697]: I0929 09:46:24.405564     697 scope.go:117] "RemoveContainer" containerID="ac4071b4dae1bc58b2f1af96e49b8a290b0f94915f22d70536878d6e33c87ee6"
	Sep 29 09:46:24 embed-certs-463478 kubelet[697]: E0929 09:46:24.405925     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwq99_kubernetes-dashboard(b74cb377-cf5c-4636-8099-44545d0374df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwq99" podUID="b74cb377-cf5c-4636-8099-44545d0374df"
	Sep 29 09:46:25 embed-certs-463478 kubelet[697]: E0929 09:46:25.469518     697 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139185469289727  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:46:25 embed-certs-463478 kubelet[697]: E0929 09:46:25.469559     697 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139185469289727  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	
	
	==> storage-provisioner [21c186f4ce38f6efbee0a9497cb3d49ff85ea8b2e21eeef27cbd9b97465b1c4e] <==
	W0929 09:46:03.014539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:05.017308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:05.023299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:07.027009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:07.030787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:09.034231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:09.038386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:11.042096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:11.046008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:13.049726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:13.055116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:15.058055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:15.062400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:17.065489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:17.069622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:19.072781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:19.077003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:21.080408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:21.084538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:23.087630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:23.091729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:25.094689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:25.100012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:27.103730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:27.108996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d53267deead34818cb1d3f144a0f60ec9c2e3038891942d6241709ff777a2486] <==
	I0929 09:36:48.960298       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 09:37:18.965205       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-463478 -n embed-certs-463478
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-463478 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-skth6 kubernetes-dashboard-855c9754f9-z8j9m
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-463478 describe pod metrics-server-746fcd58dc-skth6 kubernetes-dashboard-855c9754f9-z8j9m
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-463478 describe pod metrics-server-746fcd58dc-skth6 kubernetes-dashboard-855c9754f9-z8j9m: exit status 1 (61.907642ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-skth6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-z8j9m" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-463478 describe pod metrics-server-746fcd58dc-skth6 kubernetes-dashboard-855c9754f9-z8j9m: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.53s)
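For context on the assertion that just failed: the UserAppExistsAfterStop wait polls the cluster for pods matching the "k8s-app=kubernetes-dashboard" label in the "kubernetes-dashboard" namespace and gives up when its deadline (9m0s in these runs) expires, which is the "context deadline exceeded" failure mode recorded above. The Go sketch below is a minimal, hypothetical reconstruction of that kind of wait loop using client-go; it is not minikube's actual helper, and the kubeconfig path, the 5-second poll interval, and the readiness check are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabeledPods polls until every pod matching selector in ns is Running and
	// Ready, or until the timeout expires; the failure above is the timeout branch.
	func waitForLabeledPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 5*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // keep polling through transient errors or before pods exist
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning || !isReady(p) {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func isReady(p corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		// Assumption: a kubeconfig at the default location pointing at the test cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Same selector, namespace, and timeout as the failed assertion above.
		if err := waitForLabeledPods(cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute); err != nil {
			fmt.Println("wait failed:", err)
		}
	}

In this run the condition never became true because the dashboard image could not be pulled (see the kubelet toomanyrequests errors above), so the poll simply ran out its deadline.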

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d8kf7" [5cf5352a-bd50-49be-812d-0483e26398c0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 09:38:17.092691  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:17.099101  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:17.110469  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:17.131876  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:17.173238  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:17.254676  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:17.416211  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:17.738389  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:18.380062  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:19.661936  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:22.224048  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:27.346215  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:34.770983  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:37.588107  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:40.426362  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:50.261239  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:50.267626  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:50.279002  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:50.300389  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:50.341772  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:50.423231  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:50.584738  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:50.906093  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:51.549310  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:52.831296  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-730717 -n no-preload-730717
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 09:47:11.377678244 +0000 UTC m=+4679.024303696
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-730717 describe po kubernetes-dashboard-855c9754f9-d8kf7 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-730717 describe po kubernetes-dashboard-855c9754f9-d8kf7 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-d8kf7
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-730717/192.168.76.2
Start Time:       Mon, 29 Sep 2025 09:37:30 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jrz2z (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-jrz2z:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  9m41s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d8kf7 to no-preload-730717
Normal   Pulling    4m48s (x5 over 9m40s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     4m18s (x5 over 9m10s)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m18s (x5 over 9m10s)  kubelet            Error: ErrImagePull
Warning  Failed     2m57s (x16 over 9m9s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    109s (x21 over 9m9s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-730717 logs kubernetes-dashboard-855c9754f9-d8kf7 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context no-preload-730717 logs kubernetes-dashboard-855c9754f9-d8kf7 -n kubernetes-dashboard: exit status 1 (73.546086ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-d8kf7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context no-preload-730717 logs kubernetes-dashboard-855c9754f9-d8kf7 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
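The Events table above pins down the root cause: every pull of docker.io/kubernetesui/dashboard fails with toomanyrequests, Docker Hub's unauthenticated pull rate limit, so the container never starts and the pod sits in ImagePullBackOff until the wait times out. As a rough illustration of how the same diagnosis can be made programmatically rather than with kubectl describe, the hypothetical client-go snippet below lists the events attached to the stuck pod and prints the pull failures; the pod name and namespace are taken from this report, while the kubeconfig handling and the filtering are assumptions.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig at the default location pointing at the no-preload-730717 cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// List the events attached to the stuck dashboard pod; this is the raw stream
		// that `kubectl describe po` condenses into the Events table shown above.
		sel := "involvedObject.name=kubernetes-dashboard-855c9754f9-d8kf7,involvedObject.namespace=kubernetes-dashboard"
		evs, err := cs.CoreV1().Events("kubernetes-dashboard").List(context.Background(),
			metav1.ListOptions{FieldSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, e := range evs.Items {
			// Image-pull problems surface as Warning "Failed" events (ErrImagePull,
			// ImagePullBackOff) and Normal "BackOff" events.
			if e.Type == "Warning" || e.Reason == "BackOff" {
				fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
			}
		}
	}

The post-mortem that follows collects the equivalent information with kubectl and docker inspect.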
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-730717
helpers_test.go:243: (dbg) docker inspect no-preload-730717:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486",
	        "Created": "2025-09-29T09:35:52.393159276Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 740038,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T09:37:17.124718Z",
	            "FinishedAt": "2025-09-29T09:37:16.3014353Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486/hostname",
	        "HostsPath": "/var/lib/docker/containers/f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486/hosts",
	        "LogPath": "/var/lib/docker/containers/f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486/f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486-json.log",
	        "Name": "/no-preload-730717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-730717:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-730717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486",
	                "LowerDir": "/var/lib/docker/overlay2/b13d13e718a605b33bad67626c1cf8784cd64c71ec8c1cf72aa47d64d928ebdb-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b13d13e718a605b33bad67626c1cf8784cd64c71ec8c1cf72aa47d64d928ebdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b13d13e718a605b33bad67626c1cf8784cd64c71ec8c1cf72aa47d64d928ebdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b13d13e718a605b33bad67626c1cf8784cd64c71ec8c1cf72aa47d64d928ebdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-730717",
	                "Source": "/var/lib/docker/volumes/no-preload-730717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-730717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-730717",
	                "name.minikube.sigs.k8s.io": "no-preload-730717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52711fe838b0d0f85826aa630431aa0175c6ce826754c3b8e97871aa8a75a141",
	            "SandboxKey": "/var/run/docker/netns/52711fe838b0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-730717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:23:a0:fc:0a:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d924499e4b51e9f2d7e1ded72ae4f935ea286dd164b95d482bfdb7bef2c79707",
	                    "EndpointID": "bfbc4b7c48c4645a3dae999303c53fbe66f702e7f33d480f1ff9f332009aaf21",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-730717",
	                        "f03b3d6dc029"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
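Note: the inspect dump above is mostly consumed through Go templates; the fields the harness reads later (the published SSH port and the container IP) can be extracted directly, for example:

	# host port mapped to the node's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-730717
	# container IP on the profile network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-730717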
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-730717 -n no-preload-730717
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-730717 logs -n 25
E0929 09:47:12.828874  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-730717 logs -n 25: (1.251500974s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-646399 sudo crio config                                                                                                                                                                                                             │ bridge-646399                │ jenkins │ v1.37.0 │ 29 Sep 25 09:35 UTC │ 29 Sep 25 09:35 UTC │
	│ delete  │ -p bridge-646399                                                                                                                                                                                                                              │ bridge-646399                │ jenkins │ v1.37.0 │ 29 Sep 25 09:35 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-463478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p embed-certs-463478 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-879079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p newest-cni-879079 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-879079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-463478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ image   │ newest-cni-879079 image list --format=json                                                                                                                                                                                                    │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ pause   │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ unpause │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-730717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p no-preload-730717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable dashboard -p no-preload-730717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ start   │ -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-547715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p default-k8s-diff-port-547715 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-547715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 09:38:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 09:38:02.602451  744475 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:38:02.604572  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604588  744475 out.go:374] Setting ErrFile to fd 2...
	I0929 09:38:02.604596  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604882  744475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:38:02.605487  744475 out.go:368] Setting JSON to false
	I0929 09:38:02.606828  744475 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12032,"bootTime":1759126651,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 09:38:02.606958  744475 start.go:140] virtualization: kvm guest
	I0929 09:38:02.608781  744475 out.go:179] * [default-k8s-diff-port-547715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 09:38:02.610638  744475 notify.go:220] Checking for updates...
	I0929 09:38:02.610689  744475 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 09:38:02.611947  744475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 09:38:02.613292  744475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:02.614515  744475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 09:38:02.615846  744475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 09:38:02.617298  744475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 09:38:02.619049  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:02.619871  744475 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 09:38:02.651910  744475 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 09:38:02.652021  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.724566  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.711673677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.724736  744475 docker.go:318] overlay module found
	I0929 09:38:02.726847  744475 out.go:179] * Using the docker driver based on existing profile
	I0929 09:38:02.727965  744475 start.go:304] selected driver: docker
	I0929 09:38:02.727982  744475 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.728131  744475 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 09:38:02.728938  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.798201  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.786507737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.798574  744475 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:02.798625  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:02.798695  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:02.798744  744475 start.go:348] cluster config:
	{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.803960  744475 out.go:179] * Starting "default-k8s-diff-port-547715" primary control-plane node in "default-k8s-diff-port-547715" cluster
	I0929 09:38:02.805367  744475 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 09:38:02.806633  744475 out.go:179] * Pulling base image v0.0.48 ...
	I0929 09:38:02.807764  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:02.807815  744475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 09:38:02.807849  744475 cache.go:58] Caching tarball of preloaded images
	I0929 09:38:02.807847  744475 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 09:38:02.807982  744475 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 09:38:02.808000  744475 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 09:38:02.808163  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:02.832169  744475 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 09:38:02.832193  744475 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 09:38:02.832223  744475 cache.go:232] Successfully downloaded all kic artifacts
	I0929 09:38:02.832255  744475 start.go:360] acquireMachinesLock for default-k8s-diff-port-547715: {Name:mkef8140f377b4de895c8571ff44e24be4754e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 09:38:02.832319  744475 start.go:364] duration metric: took 42.901µs to acquireMachinesLock for "default-k8s-diff-port-547715"
	I0929 09:38:02.832343  744475 start.go:96] Skipping create...Using existing machine configuration
	I0929 09:38:02.832351  744475 fix.go:54] fixHost starting: 
	I0929 09:38:02.832639  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:02.854072  744475 fix.go:112] recreateIfNeeded on default-k8s-diff-port-547715: state=Stopped err=<nil>
	W0929 09:38:02.854102  744475 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 09:38:02.225099  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:04.724187  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:06.724381  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:02.857616  744475 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-547715" ...
	I0929 09:38:02.857727  744475 cli_runner.go:164] Run: docker start default-k8s-diff-port-547715
	I0929 09:38:03.156711  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:03.180888  744475 kic.go:430] container "default-k8s-diff-port-547715" state is running.
	I0929 09:38:03.181888  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:03.203574  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:03.203810  744475 machine.go:93] provisionDockerMachine start ...
	I0929 09:38:03.203918  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:03.225450  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:03.225788  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:03.225809  744475 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 09:38:03.226519  744475 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33470->127.0.0.1:33506: read: connection reset by peer
	I0929 09:38:06.363220  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.363248  744475 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-547715"
	I0929 09:38:06.363324  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.381317  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.381536  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.381550  744475 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-547715 && echo "default-k8s-diff-port-547715" | sudo tee /etc/hostname
	I0929 09:38:06.531735  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.531842  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.549948  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.550236  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.550256  744475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-547715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-547715/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-547715' | sudo tee -a /etc/hosts; 
				fi
			fi
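Note: the SSH command above rewrites any existing 127.0.1.1 entry to the new hostname, or appends one, but only if the hostname is not already present in /etc/hosts. Roughly, the result can be checked from the host with:

	minikube -p default-k8s-diff-port-547715 ssh "grep default-k8s-diff-port-547715 /etc/hosts"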
	I0929 09:38:06.685613  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 09:38:06.685649  744475 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 09:38:06.685684  744475 ubuntu.go:190] setting up certificates
	I0929 09:38:06.685695  744475 provision.go:84] configureAuth start
	I0929 09:38:06.685750  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:06.704839  744475 provision.go:143] copyHostCerts
	I0929 09:38:06.704915  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem, removing ...
	I0929 09:38:06.704934  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem
	I0929 09:38:06.705006  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 09:38:06.705139  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem, removing ...
	I0929 09:38:06.705152  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem
	I0929 09:38:06.705182  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 09:38:06.705261  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem, removing ...
	I0929 09:38:06.705269  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem
	I0929 09:38:06.705295  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 09:38:06.705471  744475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-547715 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-547715 localhost minikube]
	I0929 09:38:06.863319  744475 provision.go:177] copyRemoteCerts
	I0929 09:38:06.863393  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 09:38:06.863443  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.882627  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:06.979437  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 09:38:07.004710  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 09:38:07.029798  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 09:38:07.054802  744475 provision.go:87] duration metric: took 369.089658ms to configureAuth
	I0929 09:38:07.054846  744475 ubuntu.go:206] setting minikube options for container-runtime
	I0929 09:38:07.055025  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:07.055152  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.073937  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:07.074181  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:07.074200  744475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 09:38:07.357669  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 09:38:07.357696  744475 machine.go:96] duration metric: took 4.15386954s to provisionDockerMachine
	I0929 09:38:07.357709  744475 start.go:293] postStartSetup for "default-k8s-diff-port-547715" (driver="docker")
	I0929 09:38:07.357723  744475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 09:38:07.357795  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 09:38:07.357864  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.376587  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.473948  744475 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 09:38:07.477599  744475 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 09:38:07.477638  744475 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 09:38:07.477651  744475 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 09:38:07.477659  744475 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 09:38:07.477675  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 09:38:07.477729  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 09:38:07.477798  744475 filesync.go:149] local asset: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem -> 3862252.pem in /etc/ssl/certs
	I0929 09:38:07.477941  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 09:38:07.487030  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:07.511935  744475 start.go:296] duration metric: took 154.207911ms for postStartSetup
	I0929 09:38:07.512029  744475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:38:07.512065  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.530146  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.622415  744475 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 09:38:07.627142  744475 fix.go:56] duration metric: took 4.794784277s for fixHost
	I0929 09:38:07.627172  744475 start.go:83] releasing machines lock for "default-k8s-diff-port-547715", held for 4.794838826s
	I0929 09:38:07.627231  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:07.645874  744475 ssh_runner.go:195] Run: cat /version.json
	I0929 09:38:07.645918  744475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 09:38:07.645945  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.645972  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.664991  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.665181  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.828453  744475 ssh_runner.go:195] Run: systemctl --version
	I0929 09:38:07.833549  744475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 09:38:07.976610  744475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 09:38:07.981640  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:07.991646  744475 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 09:38:07.991738  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:08.001522  744475 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 09:38:08.001550  744475 start.go:495] detecting cgroup driver to use...
	I0929 09:38:08.001586  744475 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 09:38:08.001645  744475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 09:38:08.014507  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 09:38:08.026523  744475 docker.go:218] disabling cri-docker service (if available) ...
	I0929 09:38:08.026594  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 09:38:08.040674  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 09:38:08.052914  744475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 09:38:08.121663  744475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 09:38:08.190873  744475 docker.go:234] disabling docker service ...
	I0929 09:38:08.190996  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 09:38:08.203929  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 09:38:08.215853  744475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 09:38:08.282230  744475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 09:38:08.347410  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 09:38:08.359320  744475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 09:38:08.376309  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:08.524854  744475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 09:38:08.524933  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.536486  744475 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 09:38:08.536545  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.547317  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.557769  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.568183  744475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 09:38:08.578182  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.588665  744475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.598857  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.609520  744475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 09:38:08.618464  744475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 09:38:08.627869  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:08.694951  744475 ssh_runner.go:195] Run: sudo systemctl restart crio
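Note: the sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before the restart. Assuming the edits applied cleanly, the effective values can be confirmed with:

	minikube -p default-k8s-diff-port-547715 ssh \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# expected (per the commands above):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",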
	I0929 09:38:08.976752  744475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 09:38:08.976819  744475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 09:38:08.980869  744475 start.go:563] Will wait 60s for crictl version
	I0929 09:38:08.980932  744475 ssh_runner.go:195] Run: which crictl
	I0929 09:38:08.984701  744475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 09:38:09.019500  744475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 09:38:09.019620  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.055087  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.091964  744475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	W0929 09:38:08.724626  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:09.223924  739826 pod_ready.go:94] pod "coredns-66bc5c9577-ncwp4" is "Ready"
	I0929 09:38:09.224002  739826 pod_ready.go:86] duration metric: took 41.005435401s for pod "coredns-66bc5c9577-ncwp4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.226573  739826 pod_ready.go:83] waiting for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.230177  739826 pod_ready.go:94] pod "etcd-no-preload-730717" is "Ready"
	I0929 09:38:09.230196  739826 pod_ready.go:86] duration metric: took 3.600648ms for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.232019  739826 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.235556  739826 pod_ready.go:94] pod "kube-apiserver-no-preload-730717" is "Ready"
	I0929 09:38:09.235574  739826 pod_ready.go:86] duration metric: took 3.535675ms for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.237200  739826 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.422451  739826 pod_ready.go:94] pod "kube-controller-manager-no-preload-730717" is "Ready"
	I0929 09:38:09.422486  739826 pod_ready.go:86] duration metric: took 185.263743ms for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.623052  739826 pod_ready.go:83] waiting for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.022664  739826 pod_ready.go:94] pod "kube-proxy-4bmgw" is "Ready"
	I0929 09:38:10.022689  739826 pod_ready.go:86] duration metric: took 399.612543ms for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.224443  739826 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622809  739826 pod_ready.go:94] pod "kube-scheduler-no-preload-730717" is "Ready"
	I0929 09:38:10.622852  739826 pod_ready.go:86] duration metric: took 398.374387ms for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622869  739826 pod_ready.go:40] duration metric: took 42.407933129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:10.670550  739826 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:10.673808  739826 out.go:179] * Done! kubectl is now configured to use "no-preload-730717" cluster and "default" namespace by default
	I0929 09:38:09.093120  744475 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-547715 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 09:38:09.111264  744475 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 09:38:09.115466  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
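The bash one-liner above filters any stale host.minikube.internal line out of /etc/hosts and appends the gateway mapping 192.168.85.1. A rough Go equivalent of that filter-and-append step, operating on a hypothetical working copy instead of /etc/hosts itself:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal" // value used in the log
	src := "hosts.copy"                                  // hypothetical working copy, not the real /etc/hosts

	f, err := os.Open(src)
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		// drop any existing host.minikube.internal mapping, whatever IP it points at
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	f.Close()
	if err := sc.Err(); err != nil {
		fmt.Println(err)
		return
	}

	kept = append(kept, entry)
	if err := os.WriteFile(src, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("updated", src)
}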
	I0929 09:38:09.127999  744475 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 09:38:09.128194  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.274999  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.416048  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.554074  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:09.554387  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.693270  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.833942  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.976460  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.021351  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.021374  744475 crio.go:433] Images already preloaded, skipping extraction
	I0929 09:38:10.021423  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.057863  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.057891  744475 cache_images.go:85] Images are preloaded, skipping loading
	I0929 09:38:10.057901  744475 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I0929 09:38:10.058037  744475 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-547715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 09:38:10.058111  744475 ssh_runner.go:195] Run: crio config
	I0929 09:38:10.102165  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:10.102193  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:10.102207  744475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 09:38:10.102236  744475 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-547715 NodeName:default-k8s-diff-port-547715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 09:38:10.102404  744475 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-547715"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 09:38:10.102481  744475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 09:38:10.112188  744475 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 09:38:10.112255  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 09:38:10.121661  744475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 09:38:10.140487  744475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 09:38:10.160494  744475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
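The file written here, /var/tmp/minikube/kubeadm.yaml.new, is the multi-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration stacked with ---). A quick way to sanity-check such a file, sketched with gopkg.in/yaml.v3 and making no claim about how minikube itself validates it, is to decode each document and print its apiVersion and kind:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		// decode one YAML document at a time; the file holds several, separated by "---"
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents
			}
			fmt.Println("decode error:", err)
			return
		}
		fmt.Printf("document %d: %s / %s\n", i, doc.APIVersion, doc.Kind)
	}
}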
	I0929 09:38:10.179722  744475 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 09:38:10.183977  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:10.196126  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.262691  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.292254  744475 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715 for IP: 192.168.85.2
	I0929 09:38:10.292283  744475 certs.go:194] generating shared ca certs ...
	I0929 09:38:10.292301  744475 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.292443  744475 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 09:38:10.292483  744475 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 09:38:10.292493  744475 certs.go:256] generating profile certs ...
	I0929 09:38:10.292592  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/client.key
	I0929 09:38:10.292649  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key.78d67a41
	I0929 09:38:10.292690  744475 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key
	I0929 09:38:10.292789  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem (1338 bytes)
	W0929 09:38:10.292816  744475 certs.go:480] ignoring /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225_empty.pem, impossibly tiny 0 bytes
	I0929 09:38:10.292825  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 09:38:10.292877  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 09:38:10.292902  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 09:38:10.292924  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 09:38:10.292963  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:10.293652  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 09:38:10.320976  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 09:38:10.349012  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 09:38:10.381487  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 09:38:10.406553  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 09:38:10.432469  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 09:38:10.458734  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 09:38:10.483339  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 09:38:10.508019  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /usr/share/ca-certificates/3862252.pem (1708 bytes)
	I0929 09:38:10.533382  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 09:38:10.558362  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem --> /usr/share/ca-certificates/386225.pem (1338 bytes)
	I0929 09:38:10.583377  744475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 09:38:10.602070  744475 ssh_runner.go:195] Run: openssl version
	I0929 09:38:10.607660  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3862252.pem && ln -fs /usr/share/ca-certificates/3862252.pem /etc/ssl/certs/3862252.pem"
	I0929 09:38:10.617911  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622307  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 08:48 /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622354  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.629918  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3862252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 09:38:10.640804  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 09:38:10.651151  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655258  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655316  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.662603  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 09:38:10.672822  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386225.pem && ln -fs /usr/share/ca-certificates/386225.pem /etc/ssl/certs/386225.pem"
	I0929 09:38:10.683319  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687277  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 08:48 /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687348  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.696079  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386225.pem /etc/ssl/certs/51391683.0"
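The openssl x509 -hash -noout calls above compute the OpenSSL subject hash for each CA file, and the following ln -fs creates the /etc/ssl/certs/<hash>.0 lookup name (51391683.0, b5213941.0 and 3ec20f2e.0 in this run). A minimal sketch of that hash-and-link pattern, using a hypothetical certificate path rather than the test's real files:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/example.pem" // hypothetical cert path for illustration

	// openssl prints the subject hash used as the /etc/ssl/certs/<hash>.0 lookup name
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// only create the symlink if it is not already present (mirrors the test -L guard)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			fmt.Println("symlink failed:", err)
			return
		}
	}
	fmt.Println("trust link:", link, "->", cert)
}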
	I0929 09:38:10.707660  744475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 09:38:10.711977  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 09:38:10.719705  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 09:38:10.727227  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 09:38:10.734938  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 09:38:10.742331  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 09:38:10.750000  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 09:38:10.758994  744475 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:10.759111  744475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 09:38:10.759156  744475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 09:38:10.801701  744475 cri.go:89] found id: ""
	I0929 09:38:10.801777  744475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 09:38:10.814003  744475 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 09:38:10.814030  744475 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 09:38:10.814082  744475 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 09:38:10.825280  744475 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:38:10.826421  744475 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-547715" does not appear in /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.827379  744475 kubeconfig.go:62] /home/jenkins/minikube-integration/21650-382648/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-547715" cluster setting kubeconfig missing "default-k8s-diff-port-547715" context setting]
	I0929 09:38:10.828702  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.830983  744475 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 09:38:10.843171  744475 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 09:38:10.843214  744475 kubeadm.go:593] duration metric: took 29.177344ms to restartPrimaryControlPlane
	I0929 09:38:10.843227  744475 kubeadm.go:394] duration metric: took 84.244515ms to StartCluster
	I0929 09:38:10.843248  744475 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.843363  744475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.845603  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.846384  744475 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 09:38:10.846454  744475 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 09:38:10.846542  744475 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846565  744475 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846574  744475 addons.go:247] addon storage-provisioner should already be in state true
	I0929 09:38:10.846575  744475 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846596  744475 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846614  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846620  744475 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:10.846621  744475 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-547715"
	I0929 09:38:10.846618  744475 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846630  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:10.846642  744475 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846656  744475 addons.go:247] addon dashboard should already be in state true
	W0929 09:38:10.846631  744475 addons.go:247] addon metrics-server should already be in state true
	I0929 09:38:10.846681  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846697  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846974  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847135  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847150  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847155  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.848072  744475 out.go:179] * Verifying Kubernetes components...
	I0929 09:38:10.849415  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.877953  744475 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 09:38:10.877980  744475 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 09:38:10.878525  744475 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.878545  744475 addons.go:247] addon default-storageclass should already be in state true
	I0929 09:38:10.878575  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.879047  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.879403  744475 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 09:38:10.879439  744475 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:10.879448  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 09:38:10.879475  744475 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 09:38:10.879548  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.879454  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 09:38:10.879612  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.883150  744475 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 09:38:10.884341  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 09:38:10.884361  744475 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 09:38:10.884428  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.910318  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.910796  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.911948  744475 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:10.911964  744475 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 09:38:10.912016  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.914592  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.935385  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.956363  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.989150  744475 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:11.038321  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:11.042162  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 09:38:11.042187  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 09:38:11.047218  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 09:38:11.047242  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 09:38:11.070239  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.072804  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 09:38:11.072828  744475 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 09:38:11.078863  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 09:38:11.078893  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 09:38:11.104886  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.104914  744475 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 09:38:11.110131  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 09:38:11.110158  744475 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 09:38:11.142191  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 09:38:11.142219  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 09:38:11.148094  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.148238  744475 retry.go:31] will retry after 359.205678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.151384  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.179885  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 09:38:11.179923  744475 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 09:38:11.182481  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.182514  744475 retry.go:31] will retry after 316.417959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.208649  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 09:38:11.208682  744475 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 09:38:11.232655  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 09:38:11.232724  744475 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 09:38:11.252807  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 09:38:11.252860  744475 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 09:38:11.272945  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.272972  744475 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 09:38:11.292603  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.499678  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.508207  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
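Both initial applies fail with connection refused on localhost:8444 because the apiserver is still restarting, and retry.go schedules a second attempt a few hundred milliseconds later; the two kubectl apply --force runs just above are those retries. A generic retry-with-backoff helper in that spirit (not minikube's retry package, and the manifest path is just the one from the log) could look like:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs fn up to attempts times, sleeping delay between tries and
// doubling the delay after each failure.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	// manifest path taken from the log; kubectl is assumed to be on PATH here
	apply := func() error {
		out, err := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}
	if err := retry(5, 300*time.Millisecond, apply); err != nil {
		fmt.Println("giving up:", err)
	}
}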
	I0929 09:38:12.841081  744475 node_ready.go:49] node "default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:12.841123  744475 node_ready.go:38] duration metric: took 1.85187108s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:12.841142  744475 api_server.go:52] waiting for apiserver process to appear ...
	I0929 09:38:12.841200  744475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:38:13.424995  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.273447364s)
	I0929 09:38:13.425060  744475 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:13.425163  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.132513063s)
	I0929 09:38:13.425661  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.925949942s)
	I0929 09:38:13.425900  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.917662767s)
	I0929 09:38:13.426006  744475 api_server.go:72] duration metric: took 2.57958819s to wait for apiserver process to appear ...
	I0929 09:38:13.426024  744475 api_server.go:88] waiting for apiserver healthz status ...
	I0929 09:38:13.426045  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.427072  744475 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-547715 addons enable metrics-server
	
	I0929 09:38:13.431499  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.431522  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:13.435572  744475 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0929 09:38:13.436883  744475 addons.go:514] duration metric: took 2.590443822s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0929 09:38:13.926913  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.932318  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.932348  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:14.426994  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:14.431739  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
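The healthz probes return 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still initializing, then flip to 200 about a second later. A bare-bones poller in the same spirit, assuming the address from the log and skipping TLS verification purely so the sketch stays short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.85.2:8444/healthz" // endpoint shown in the log
	client := &http.Client{
		Timeout: 2 * time.Second,
		// skip certificate verification for this sketch only; a real client should trust the cluster CA
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // prints "ok"
				return
			}
			fmt.Println("healthz not ready, status:", resp.StatusCode)
		} else {
			fmt.Println("healthz request failed:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}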
	I0929 09:38:14.432753  744475 api_server.go:141] control plane version: v1.34.1
	I0929 09:38:14.432785  744475 api_server.go:131] duration metric: took 1.006754243s to wait for apiserver health ...
	I0929 09:38:14.432798  744475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 09:38:14.435903  744475 system_pods.go:59] 9 kube-system pods found
	I0929 09:38:14.435952  744475 system_pods.go:61] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.435967  744475 system_pods.go:61] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.435982  744475 system_pods.go:61] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.435998  744475 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.436014  744475 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.436023  744475 system_pods.go:61] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.436033  744475 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.436045  744475 system_pods.go:61] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.436053  744475 system_pods.go:61] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.436063  744475 system_pods.go:74] duration metric: took 3.257318ms to wait for pod list to return data ...
	I0929 09:38:14.436077  744475 default_sa.go:34] waiting for default service account to be created ...
	I0929 09:38:14.438271  744475 default_sa.go:45] found service account: "default"
	I0929 09:38:14.438293  744475 default_sa.go:55] duration metric: took 2.206178ms for default service account to be created ...
	I0929 09:38:14.438304  744475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 09:38:14.441520  744475 system_pods.go:86] 9 kube-system pods found
	I0929 09:38:14.441555  744475 system_pods.go:89] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.441569  744475 system_pods.go:89] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.441583  744475 system_pods.go:89] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.441591  744475 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.441606  744475 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.441613  744475 system_pods.go:89] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.441622  744475 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.441633  744475 system_pods.go:89] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.441641  744475 system_pods.go:89] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.441654  744475 system_pods.go:126] duration metric: took 3.342797ms to wait for k8s-apps to be running ...
	I0929 09:38:14.441667  744475 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 09:38:14.441718  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:38:14.457198  744475 system_svc.go:56] duration metric: took 15.510885ms WaitForService to wait for kubelet
	I0929 09:38:14.457234  744475 kubeadm.go:578] duration metric: took 3.610818298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:14.457257  744475 node_conditions.go:102] verifying NodePressure condition ...
	I0929 09:38:14.460508  744475 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 09:38:14.460534  744475 node_conditions.go:123] node cpu capacity is 8
	I0929 09:38:14.460550  744475 node_conditions.go:105] duration metric: took 3.287088ms to run NodePressure ...
	I0929 09:38:14.460566  744475 start.go:241] waiting for startup goroutines ...
	I0929 09:38:14.460575  744475 start.go:246] waiting for cluster config update ...
	I0929 09:38:14.460591  744475 start.go:255] writing updated cluster config ...
	I0929 09:38:14.461011  744475 ssh_runner.go:195] Run: rm -f paused
	I0929 09:38:14.465262  744475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:14.469249  744475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 09:38:16.474616  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:18.974817  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:21.474679  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:23.974653  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:25.974904  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:27.975234  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:30.474414  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:32.475244  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:34.975746  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:37.474689  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:39.974324  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:42.474794  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:44.476364  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:46.974499  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:49.474657  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:51.474940  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	I0929 09:38:52.974403  744475 pod_ready.go:94] pod "coredns-66bc5c9577-szmnf" is "Ready"
	I0929 09:38:52.974429  744475 pod_ready.go:86] duration metric: took 38.50515659s for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.977032  744475 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.980878  744475 pod_ready.go:94] pod "etcd-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.980904  744475 pod_ready.go:86] duration metric: took 3.847603ms for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.982681  744475 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.986175  744475 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.986196  744475 pod_ready.go:86] duration metric: took 3.493752ms for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.988006  744475 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.172805  744475 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:53.172860  744475 pod_ready.go:86] duration metric: took 184.829323ms for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.372987  744475 pod_ready.go:83] waiting for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.772398  744475 pod_ready.go:94] pod "kube-proxy-tklgn" is "Ready"
	I0929 09:38:53.772428  744475 pod_ready.go:86] duration metric: took 399.413461ms for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.972993  744475 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373344  744475 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:54.373370  744475 pod_ready.go:86] duration metric: took 400.353446ms for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373382  744475 pod_ready.go:40] duration metric: took 39.908092821s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:54.420218  744475 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:54.422092  744475 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-547715" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 09:45:36 no-preload-730717 crio[562]: time="2025-09-29 09:45:36.095305779Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 09:45:42 no-preload-730717 crio[562]: time="2025-09-29 09:45:42.094244232Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8171578e-66bd-4ba0-8361-d7d7b488490c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:42 no-preload-730717 crio[562]: time="2025-09-29 09:45:42.094452305Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8171578e-66bd-4ba0-8361-d7d7b488490c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:53 no-preload-730717 crio[562]: time="2025-09-29 09:45:53.093217920Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9a9bd51d-91a6-4318-96ba-dc8f860a2efe name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:45:53 no-preload-730717 crio[562]: time="2025-09-29 09:45:53.093512413Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9a9bd51d-91a6-4318-96ba-dc8f860a2efe name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:04 no-preload-730717 crio[562]: time="2025-09-29 09:46:04.093223157Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1acba9f9-66c2-45f9-b429-de45d327726d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:04 no-preload-730717 crio[562]: time="2025-09-29 09:46:04.093584563Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1acba9f9-66c2-45f9-b429-de45d327726d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:15 no-preload-730717 crio[562]: time="2025-09-29 09:46:15.092456054Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e77ed401-31dc-4d35-8938-88f2cb876966 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:15 no-preload-730717 crio[562]: time="2025-09-29 09:46:15.092750385Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e77ed401-31dc-4d35-8938-88f2cb876966 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:22 no-preload-730717 crio[562]: time="2025-09-29 09:46:22.093408772Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e7994408-9b20-49b0-88ef-2a7b1a7c08df name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:22 no-preload-730717 crio[562]: time="2025-09-29 09:46:22.093728316Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e7994408-9b20-49b0-88ef-2a7b1a7c08df name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:28 no-preload-730717 crio[562]: time="2025-09-29 09:46:28.093014643Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8b56a637-91a8-47c7-9adf-1efcf4d6b6c0 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:28 no-preload-730717 crio[562]: time="2025-09-29 09:46:28.093313822Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8b56a637-91a8-47c7-9adf-1efcf4d6b6c0 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:34 no-preload-730717 crio[562]: time="2025-09-29 09:46:34.093446383Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=db4ae5a5-e496-4d87-8908-cfe3451a73b4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:34 no-preload-730717 crio[562]: time="2025-09-29 09:46:34.093776395Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=db4ae5a5-e496-4d87-8908-cfe3451a73b4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:42 no-preload-730717 crio[562]: time="2025-09-29 09:46:42.092554008Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e532bad5-f8e8-41a5-9d09-c98692fa821f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:42 no-preload-730717 crio[562]: time="2025-09-29 09:46:42.092876778Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e532bad5-f8e8-41a5-9d09-c98692fa821f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:47 no-preload-730717 crio[562]: time="2025-09-29 09:46:47.092301770Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=af93139a-fad2-40c6-97de-da4f91edb761 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:47 no-preload-730717 crio[562]: time="2025-09-29 09:46:47.092542450Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=af93139a-fad2-40c6-97de-da4f91edb761 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:56 no-preload-730717 crio[562]: time="2025-09-29 09:46:56.092738609Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=38f714f4-f441-40ab-8692-1a8540fa061d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:56 no-preload-730717 crio[562]: time="2025-09-29 09:46:56.093057852Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=38f714f4-f441-40ab-8692-1a8540fa061d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:58 no-preload-730717 crio[562]: time="2025-09-29 09:46:58.093146539Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a2c63ab6-758b-46a0-a7b2-3bec989580bc name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:58 no-preload-730717 crio[562]: time="2025-09-29 09:46:58.093478281Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a2c63ab6-758b-46a0-a7b2-3bec989580bc name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:09 no-preload-730717 crio[562]: time="2025-09-29 09:47:09.092922026Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ad6d145c-253a-4dc1-8f56-b424c3926fe2 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:09 no-preload-730717 crio[562]: time="2025-09-29 09:47:09.093198739Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ad6d145c-253a-4dc1-8f56-b424c3926fe2 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	6826e0847c7ff       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   46f2804480b20       dashboard-metrics-scraper-6ffb444bf9-vrtpm
	bdf81a55b041d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   a10dc97aa6f13       storage-provisioner
	b42daf67456ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   329fa422a72e3       coredns-66bc5c9577-ncwp4
	2525216b46e99       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   dc691a9058172       busybox
	9ac0db1de5c9e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   a10dc97aa6f13       storage-provisioner
	b322c8a93a311       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   9 minutes ago       Running             kube-proxy                  1                   da67f24f8ba06       kube-proxy-4bmgw
	eed7243881418       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   34b363fa78c75       kindnet-97tnr
	1d1678bd6daae       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 minutes ago       Running             kube-apiserver              1                   9b42da1f49df5       kube-apiserver-no-preload-730717
	8c5200e560089       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 minutes ago       Running             kube-controller-manager     1                   acc1f14dd813e       kube-controller-manager-no-preload-730717
	2dfa3eec550c6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 minutes ago       Running             kube-scheduler              1                   92c6e99773f89       kube-scheduler-no-preload-730717
	9a7e8ebe2c7f8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   a087c6efc4edc       etcd-no-preload-730717
	
	
	==> coredns [b42daf67456ad57382aaa4b3197eceb499c4f8125ab0d76af7df60ce5d3ca961] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52547 - 5251 "HINFO IN 3276868380242433564.5868470022607830145. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.474034407s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-730717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-730717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=no-preload-730717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T09_36_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 09:36:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-730717
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:47:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 09:46:36 +0000   Mon, 29 Sep 2025 09:36:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 09:46:36 +0000   Mon, 29 Sep 2025 09:36:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 09:46:36 +0000   Mon, 29 Sep 2025 09:36:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 09:46:36 +0000   Mon, 29 Sep 2025 09:36:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-730717
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 5521e6bf6c6b43289a49004b78ac9a1f
	  System UUID:                cf880771-51a4-4a5c-81a6-14d707678d39
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-ncwp4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-no-preload-730717                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-97tnr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-no-preload-730717              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-no-preload-730717     200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4bmgw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-no-preload-730717              100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-42r64               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vrtpm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d8kf7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node no-preload-730717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node no-preload-730717 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node no-preload-730717 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node no-preload-730717 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node no-preload-730717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node no-preload-730717 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node no-preload-730717 event: Registered Node no-preload-730717 in Controller
	  Normal  NodeReady                10m                    kubelet          Node no-preload-730717 status is now: NodeReady
	  Normal  Starting                 9m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m48s (x8 over 9m48s)  kubelet          Node no-preload-730717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m48s (x8 over 9m48s)  kubelet          Node no-preload-730717 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m48s (x8 over 9m48s)  kubelet          Node no-preload-730717 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m42s                  node-controller  Node no-preload-730717 event: Registered Node no-preload-730717 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +24.116183] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[ +13.219794] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 36 70 5c 70 56 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[Sep29 09:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 d0 49 6d e5 00 08 06
	[  +0.000572] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +31.077955] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[  +7.090917] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ee a6 ac d9 7a 08 06
	[  +0.048507] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	[Sep29 09:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 9c 10 70 fc bc 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[ +35.403219] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 f0 eb 9a e4 7a 08 06
	[  +0.000378] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	
	
	==> etcd [9a7e8ebe2c7f8235a975702327b3918be43c56992c94e1e2d62e3a60dacdf738] <==
	{"level":"warn","ts":"2025-09-29T09:37:25.956587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:25.964051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:25.973688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:25.980490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:25.987007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:25.996139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.002953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.010006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.016483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.023363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.030942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.046768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.055141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.062307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.068683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.074983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.082001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.089063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.096641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.104587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.111363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.126053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.133101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.140139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.183884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41638","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:47:12 up  3:29,  0 users,  load average: 0.28, 0.83, 1.47
	Linux no-preload-730717 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [eed72438814186b709516f48ea9db82d0175fe6211916cae17d158915dc933a9] <==
	I0929 09:45:07.974569       1 main.go:301] handling current node
	I0929 09:45:17.974915       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:45:17.974948       1 main.go:301] handling current node
	I0929 09:45:27.967271       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:45:27.967304       1 main.go:301] handling current node
	I0929 09:45:37.974313       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:45:37.974352       1 main.go:301] handling current node
	I0929 09:45:47.971923       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:45:47.971960       1 main.go:301] handling current node
	I0929 09:45:57.967907       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:45:57.967964       1 main.go:301] handling current node
	I0929 09:46:07.975224       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:46:07.975266       1 main.go:301] handling current node
	I0929 09:46:17.975936       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:46:17.975968       1 main.go:301] handling current node
	I0929 09:46:27.967915       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:46:27.967978       1 main.go:301] handling current node
	I0929 09:46:37.967907       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:46:37.967948       1 main.go:301] handling current node
	I0929 09:46:47.971468       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:46:47.971499       1 main.go:301] handling current node
	I0929 09:46:57.967957       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:46:57.967990       1 main.go:301] handling current node
	I0929 09:47:07.970944       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:47:07.970978       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d1678bd6daaee7593cf308b3b04fde00b41f17a7641d4cfd2833778f925bfc1] <==
	E0929 09:42:27.649771       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:42:27.649789       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 09:42:27.649847       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:42:27.651007       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:43:27.650394       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:43:27.650447       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:43:27.650462       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:43:27.651502       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:43:27.651559       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:43:27.651574       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:45:27.651262       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:45:27.651318       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:45:27.651338       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:45:27.652415       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:45:27.652497       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:45:27.652514       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8c5200e560089994a461092d833964ba4100be86716c520527a31816beee515c] <==
	I0929 09:41:00.125037       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:41:30.094598       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:41:30.132448       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:42:00.098948       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:42:00.140016       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:42:30.102941       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:42:30.146917       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:43:00.106705       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:43:00.154300       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:43:30.111414       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:43:30.162105       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:44:00.115415       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:44:00.170440       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:44:30.120007       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:44:30.176582       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:45:00.124258       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:45:00.183337       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:45:30.129285       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:45:30.190361       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:46:00.133649       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:46:00.197660       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:46:30.138984       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:46:30.204420       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:47:00.143028       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:47:00.211380       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [b322c8a93a311d0675ba0aa0a333a4ca0b835a54321e9b9203627668790dd927] <==
	I0929 09:37:27.609505       1 server_linux.go:53] "Using iptables proxy"
	I0929 09:37:27.667898       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 09:37:27.768685       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 09:37:27.768724       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 09:37:27.768881       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 09:37:27.788888       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 09:37:27.788954       1 server_linux.go:132] "Using iptables Proxier"
	I0929 09:37:27.793942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 09:37:27.794420       1 server.go:527] "Version info" version="v1.34.1"
	I0929 09:37:27.794463       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:37:27.795689       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 09:37:27.795703       1 config.go:200] "Starting service config controller"
	I0929 09:37:27.795718       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 09:37:27.795718       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 09:37:27.795802       1 config.go:309] "Starting node config controller"
	I0929 09:37:27.795822       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 09:37:27.795846       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 09:37:27.795864       1 config.go:106] "Starting endpoint slice config controller"
	I0929 09:37:27.795902       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 09:37:27.895906       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 09:37:27.897093       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 09:37:27.897145       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2dfa3eec550c6076517250ed12f57707e7490bb65f701309138b1198d6e23007] <==
	I0929 09:37:25.635230       1 serving.go:386] Generated self-signed cert in-memory
	W0929 09:37:26.593585       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 09:37:26.593618       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 09:37:26.593631       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 09:37:26.593640       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 09:37:26.649547       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I0929 09:37:26.649653       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:37:26.652483       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:37:26.652575       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:37:26.653668       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 09:37:26.653799       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 09:37:26.752744       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 09:46:24 no-preload-730717 kubelet[699]: E0929 09:46:24.158957     699 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139184158706183  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:46:24 no-preload-730717 kubelet[699]: E0929 09:46:24.159004     699 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139184158706183  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:46:28 no-preload-730717 kubelet[699]: E0929 09:46:28.093660     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-42r64" podUID="345a8584-75b1-484c-b650-af1b45a8db0d"
	Sep 29 09:46:29 no-preload-730717 kubelet[699]: I0929 09:46:29.092165     699 scope.go:117] "RemoveContainer" containerID="6826e0847c7ffd47993412e26af1be5a15897fd541d18aa0e807864ef859f39d"
	Sep 29 09:46:29 no-preload-730717 kubelet[699]: E0929 09:46:29.092349     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vrtpm_kubernetes-dashboard(8a18522c-15ef-49f7-a1ee-a1867b6fd113)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vrtpm" podUID="8a18522c-15ef-49f7-a1ee-a1867b6fd113"
	Sep 29 09:46:34 no-preload-730717 kubelet[699]: E0929 09:46:34.094142     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d8kf7" podUID="5cf5352a-bd50-49be-812d-0483e26398c0"
	Sep 29 09:46:34 no-preload-730717 kubelet[699]: E0929 09:46:34.160217     699 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139194159977576  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:46:34 no-preload-730717 kubelet[699]: E0929 09:46:34.160249     699 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139194159977576  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:46:42 no-preload-730717 kubelet[699]: E0929 09:46:42.093213     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-42r64" podUID="345a8584-75b1-484c-b650-af1b45a8db0d"
	Sep 29 09:46:43 no-preload-730717 kubelet[699]: I0929 09:46:43.092035     699 scope.go:117] "RemoveContainer" containerID="6826e0847c7ffd47993412e26af1be5a15897fd541d18aa0e807864ef859f39d"
	Sep 29 09:46:43 no-preload-730717 kubelet[699]: E0929 09:46:43.092220     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vrtpm_kubernetes-dashboard(8a18522c-15ef-49f7-a1ee-a1867b6fd113)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vrtpm" podUID="8a18522c-15ef-49f7-a1ee-a1867b6fd113"
	Sep 29 09:46:44 no-preload-730717 kubelet[699]: E0929 09:46:44.161407     699 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139204161092648  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:46:44 no-preload-730717 kubelet[699]: E0929 09:46:44.161443     699 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139204161092648  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:46:47 no-preload-730717 kubelet[699]: E0929 09:46:47.092928     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d8kf7" podUID="5cf5352a-bd50-49be-812d-0483e26398c0"
	Sep 29 09:46:54 no-preload-730717 kubelet[699]: E0929 09:46:54.162735     699 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139214162383782  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:46:54 no-preload-730717 kubelet[699]: E0929 09:46:54.162775     699 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139214162383782  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:46:56 no-preload-730717 kubelet[699]: E0929 09:46:56.093362     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-42r64" podUID="345a8584-75b1-484c-b650-af1b45a8db0d"
	Sep 29 09:46:58 no-preload-730717 kubelet[699]: I0929 09:46:58.092599     699 scope.go:117] "RemoveContainer" containerID="6826e0847c7ffd47993412e26af1be5a15897fd541d18aa0e807864ef859f39d"
	Sep 29 09:46:58 no-preload-730717 kubelet[699]: E0929 09:46:58.092859     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vrtpm_kubernetes-dashboard(8a18522c-15ef-49f7-a1ee-a1867b6fd113)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vrtpm" podUID="8a18522c-15ef-49f7-a1ee-a1867b6fd113"
	Sep 29 09:46:58 no-preload-730717 kubelet[699]: E0929 09:46:58.093716     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d8kf7" podUID="5cf5352a-bd50-49be-812d-0483e26398c0"
	Sep 29 09:47:04 no-preload-730717 kubelet[699]: E0929 09:47:04.164353     699 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139224164060066  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:47:04 no-preload-730717 kubelet[699]: E0929 09:47:04.164390     699 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139224164060066  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:47:09 no-preload-730717 kubelet[699]: E0929 09:47:09.093543     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-42r64" podUID="345a8584-75b1-484c-b650-af1b45a8db0d"
	Sep 29 09:47:11 no-preload-730717 kubelet[699]: I0929 09:47:11.092381     699 scope.go:117] "RemoveContainer" containerID="6826e0847c7ffd47993412e26af1be5a15897fd541d18aa0e807864ef859f39d"
	Sep 29 09:47:11 no-preload-730717 kubelet[699]: E0929 09:47:11.092612     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vrtpm_kubernetes-dashboard(8a18522c-15ef-49f7-a1ee-a1867b6fd113)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vrtpm" podUID="8a18522c-15ef-49f7-a1ee-a1867b6fd113"
	
	
	==> storage-provisioner [9ac0db1de5c9e7283faca5cac820b11ebe6eadf9130f1232f27003dd62509583] <==
	I0929 09:37:27.555975       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 09:37:57.561267       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bdf81a55b041d54c3b595f42b184b64b1725bce0a2b90db23eb7fd721aa16cab] <==
	W0929 09:46:47.636009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:49.639608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:49.644449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:51.648461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:51.652252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:53.655161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:53.660036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:55.663075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:55.666945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:57.670228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:57.673962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:59.677318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:46:59.681568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:01.685441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:01.690020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:03.693219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:03.697167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:05.700474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:05.705299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:07.707906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:07.711986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:09.715402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:09.720366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:11.724356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:11.728608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
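
The storage-provisioner warnings in the log above are emitted because it still watches the core v1 Endpoints API; the warning text points at discovery.k8s.io/v1 EndpointSlice as the replacement. Not part of the test output: a minimal client-go sketch of reading EndpointSlices instead of Endpoints, assuming an in-cluster service-account config and using the kube-system namespace purely for illustration.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs inside the cluster with a service account mounted.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Read discovery.k8s.io/v1 EndpointSlices rather than the deprecated core v1 Endpoints.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Println(s.Name, ep.Addresses)
		}
	}
}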
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-730717 -n no-preload-730717
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-730717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-42r64 kubernetes-dashboard-855c9754f9-d8kf7
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-730717 describe pod metrics-server-746fcd58dc-42r64 kubernetes-dashboard-855c9754f9-d8kf7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-730717 describe pod metrics-server-746fcd58dc-42r64 kubernetes-dashboard-855c9754f9-d8kf7: exit status 1 (61.987228ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-42r64" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-d8kf7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-730717 describe pod metrics-server-746fcd58dc-42r64 kubernetes-dashboard-855c9754f9-d8kf7: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.55s)
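
The kubernetes-dashboard image pulls in the kubelet log above failed with Docker Hub's toomanyrequests error (unauthenticated pull rate limit). Not part of the test output: a minimal Go sketch for checking the remaining anonymous quota from the CI host, assuming Docker's documented anonymous token endpoint (auth.docker.io), registry endpoint (registry-1.docker.io), the ratelimitpreview/test probe repository, and the ratelimit-limit / ratelimit-remaining response headers.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Fetch an anonymous pull token for Docker's documented rate-limit probe repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// HEAD the manifest; Docker Hub reports the anonymous quota in response headers.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()

	fmt.Println("status:             ", res.Status) // 429 once the limit is exhausted
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}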

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qghq7" [d0d73ee5-b7eb-4f95-a577-03315e1c1e0a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 09:38:55.393212  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:38:58.070324  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:00.515433  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:01.699003  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:01.705362  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:01.716756  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:01.738169  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:01.779975  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:01.861411  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:02.023021  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:02.344873  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:02.987181  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:04.268670  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:06.830700  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:10.757425  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:11.952568  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:22.193935  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:31.238767  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:39.031999  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:42.675457  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:47.443089  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:47.449485  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:47.460920  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:47.482336  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:47.523806  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:47.605954  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:47.767999  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:48.089533  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:48.731273  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:50.012942  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:52.574348  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:56.692934  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:39:57.696368  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:02.348430  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:07.938468  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:12.200445  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:19.892689  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:23.637412  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:28.419879  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:29.301878  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:29.308244  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:29.319660  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:29.341027  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:29.382424  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:29.463905  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:29.625353  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:29.947084  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:30.589052  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:31.871291  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:34.433058  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:39.554412  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:49.795749  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:40:58.770375  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:41:00.954259  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:41:09.381933  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:41:10.278038  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:41:15.699909  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:41:34.122597  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:41:45.559065  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:41:51.239404  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:42:12.829125  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:42:18.486050  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:42:31.303996  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:42:40.534907  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:42:46.190304  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:43:13.161063  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:43:17.092529  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:43:22.960682  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:43:44.795972  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:43:50.261470  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:44:01.698165  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:44:17.964381  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:44:29.400990  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:44:47.443172  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:45:15.145987  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:45:19.892950  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:45:29.302022  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 09:47:55.071718219 +0000 UTC m=+4722.718343676
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 describe po kubernetes-dashboard-855c9754f9-qghq7 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-547715 describe po kubernetes-dashboard-855c9754f9-qghq7 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-qghq7
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-547715/192.168.85.2
Start Time:       Mon, 29 Sep 2025 09:38:16 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zr2k8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-zr2k8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                    From               Message
  ----     ------       ----                   ----               -------
  Normal   Scheduled    9m38s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qghq7 to default-k8s-diff-port-547715
  Warning  FailedMount  9m39s                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-zr2k8" : configmap "kube-root-ca.crt" not found
  Warning  Failed       7m25s (x3 over 9m5s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling      4m31s (x5 over 9m38s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed       4m (x5 over 9m5s)      kubelet            Error: ErrImagePull
  Warning  Failed       4m (x2 over 6m)        kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed       3m9s (x15 over 9m4s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff      102s (x21 over 9m4s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 logs kubernetes-dashboard-855c9754f9-qghq7 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-547715 logs kubernetes-dashboard-855c9754f9-qghq7 -n kubernetes-dashboard: exit status 1 (75.023239ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-qghq7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-547715 logs kubernetes-dashboard-855c9754f9-qghq7 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
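
The assertion that failed here waits up to 9m0s for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace to become Ready. Not part of the suite: a rough client-go sketch of that kind of readiness wait; the kubeconfig path and the 5s poll interval are illustrative assumptions, not values taken from the test.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether at least one pod matches the selector and every match is Running with Ready=True.
func podsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Illustrative kubeconfig path; the real suite drives kubectl with --context instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		ok, err := podsReady(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
		if err != nil {
			log.Printf("list pods: %v", err)
		}
		if ok {
			fmt.Println("dashboard pods are Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for k8s-app=kubernetes-dashboard") // the condition the test reports above
		case <-time.After(5 * time.Second):
		}
	}
}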
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-547715
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-547715:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39",
	        "Created": "2025-09-29T09:37:00.383172067Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 744659,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T09:38:02.888389154Z",
	            "FinishedAt": "2025-09-29T09:38:01.958756731Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39/hostname",
	        "HostsPath": "/var/lib/docker/containers/0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39/hosts",
	        "LogPath": "/var/lib/docker/containers/0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39/0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39-json.log",
	        "Name": "/default-k8s-diff-port-547715",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-547715:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-547715",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39",
	                "LowerDir": "/var/lib/docker/overlay2/7ee8063a0cee7dec7a9803ec54e49363559b4475815b4f3f0484f2f68765651b-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7ee8063a0cee7dec7a9803ec54e49363559b4475815b4f3f0484f2f68765651b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7ee8063a0cee7dec7a9803ec54e49363559b4475815b4f3f0484f2f68765651b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7ee8063a0cee7dec7a9803ec54e49363559b4475815b4f3f0484f2f68765651b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-547715",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-547715/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-547715",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-547715",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-547715",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "430c0c64e53b0585256ff4ae33923900e2b772a28a10909c57aa7cf6d4fa82c7",
	            "SandboxKey": "/var/run/docker/netns/430c0c64e53b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-547715": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:d7:da:af:f9:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3c300966847ea321243189f9b85a3983ffa1be9c8e7a6f7878f542b39ea8eee5",
	                    "EndpointID": "38c98ff51ecb0df328c367ed9f76471369322141671140e922de0e3e1bce97d9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-547715",
	                        "0eca4c191c94"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-547715 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-547715 logs -n 25: (1.203191541s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-646399 sudo crio config                                                                                                                                                                                                             │ bridge-646399                │ jenkins │ v1.37.0 │ 29 Sep 25 09:35 UTC │ 29 Sep 25 09:35 UTC │
	│ delete  │ -p bridge-646399                                                                                                                                                                                                                              │ bridge-646399                │ jenkins │ v1.37.0 │ 29 Sep 25 09:35 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-463478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p embed-certs-463478 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-879079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p newest-cni-879079 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-879079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-463478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ image   │ newest-cni-879079 image list --format=json                                                                                                                                                                                                    │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ pause   │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ unpause │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-730717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p no-preload-730717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable dashboard -p no-preload-730717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ start   │ -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-547715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p default-k8s-diff-port-547715 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-547715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 09:38:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 09:38:02.602451  744475 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:38:02.604572  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604588  744475 out.go:374] Setting ErrFile to fd 2...
	I0929 09:38:02.604596  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604882  744475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:38:02.605487  744475 out.go:368] Setting JSON to false
	I0929 09:38:02.606828  744475 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12032,"bootTime":1759126651,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 09:38:02.606958  744475 start.go:140] virtualization: kvm guest
	I0929 09:38:02.608781  744475 out.go:179] * [default-k8s-diff-port-547715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 09:38:02.610638  744475 notify.go:220] Checking for updates...
	I0929 09:38:02.610689  744475 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 09:38:02.611947  744475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 09:38:02.613292  744475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:02.614515  744475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 09:38:02.615846  744475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 09:38:02.617298  744475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 09:38:02.619049  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:02.619871  744475 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 09:38:02.651910  744475 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 09:38:02.652021  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.724566  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.711673677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.724736  744475 docker.go:318] overlay module found
	I0929 09:38:02.726847  744475 out.go:179] * Using the docker driver based on existing profile
	I0929 09:38:02.727965  744475 start.go:304] selected driver: docker
	I0929 09:38:02.727982  744475 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.728131  744475 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 09:38:02.728938  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.798201  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.786507737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.798574  744475 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:02.798625  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:02.798695  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:02.798744  744475 start.go:348] cluster config:
	{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.803960  744475 out.go:179] * Starting "default-k8s-diff-port-547715" primary control-plane node in "default-k8s-diff-port-547715" cluster
	I0929 09:38:02.805367  744475 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 09:38:02.806633  744475 out.go:179] * Pulling base image v0.0.48 ...
	I0929 09:38:02.807764  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:02.807815  744475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 09:38:02.807849  744475 cache.go:58] Caching tarball of preloaded images
	I0929 09:38:02.807847  744475 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 09:38:02.807982  744475 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 09:38:02.808000  744475 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 09:38:02.808163  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:02.832169  744475 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 09:38:02.832193  744475 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 09:38:02.832223  744475 cache.go:232] Successfully downloaded all kic artifacts
	I0929 09:38:02.832255  744475 start.go:360] acquireMachinesLock for default-k8s-diff-port-547715: {Name:mkef8140f377b4de895c8571ff44e24be4754e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 09:38:02.832319  744475 start.go:364] duration metric: took 42.901µs to acquireMachinesLock for "default-k8s-diff-port-547715"
	I0929 09:38:02.832343  744475 start.go:96] Skipping create...Using existing machine configuration
	I0929 09:38:02.832351  744475 fix.go:54] fixHost starting: 
	I0929 09:38:02.832639  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:02.854072  744475 fix.go:112] recreateIfNeeded on default-k8s-diff-port-547715: state=Stopped err=<nil>
	W0929 09:38:02.854102  744475 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 09:38:02.225099  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:04.724187  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:06.724381  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:02.857616  744475 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-547715" ...
	I0929 09:38:02.857727  744475 cli_runner.go:164] Run: docker start default-k8s-diff-port-547715
	I0929 09:38:03.156711  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:03.180888  744475 kic.go:430] container "default-k8s-diff-port-547715" state is running.
	I0929 09:38:03.181888  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:03.203574  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:03.203810  744475 machine.go:93] provisionDockerMachine start ...
	I0929 09:38:03.203918  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
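The SSH dial below targets whatever host port that Go-template lookup returns; run standalone against the same profile, the lookup is just this (port 33506 in this run):
	# Print the host port Docker mapped to the container's 22/tcp.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-547715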
	I0929 09:38:03.225450  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:03.225788  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:03.225809  744475 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 09:38:03.226519  744475 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33470->127.0.0.1:33506: read: connection reset by peer
	I0929 09:38:06.363220  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.363248  744475 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-547715"
	I0929 09:38:06.363324  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.381317  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.381536  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.381550  744475 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-547715 && echo "default-k8s-diff-port-547715" | sudo tee /etc/hostname
	I0929 09:38:06.531735  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.531842  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.549948  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.550236  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.550256  744475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-547715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-547715/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-547715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 09:38:06.685613  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 09:38:06.685649  744475 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 09:38:06.685684  744475 ubuntu.go:190] setting up certificates
	I0929 09:38:06.685695  744475 provision.go:84] configureAuth start
	I0929 09:38:06.685750  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:06.704839  744475 provision.go:143] copyHostCerts
	I0929 09:38:06.704915  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem, removing ...
	I0929 09:38:06.704934  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem
	I0929 09:38:06.705006  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 09:38:06.705139  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem, removing ...
	I0929 09:38:06.705152  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem
	I0929 09:38:06.705182  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 09:38:06.705261  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem, removing ...
	I0929 09:38:06.705269  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem
	I0929 09:38:06.705295  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 09:38:06.705471  744475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-547715 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-547715 localhost minikube]
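provision.go builds that server certificate in Go; purely to make the SAN list concrete, a rough OpenSSL equivalent would be the sketch below (file names and the validity period are assumptions, not minikube's actual paths):
	# Key + CSR for the machine, then sign with the minikube CA, attaching the same SANs.
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.default-k8s-diff-port-547715" -out server.csr
	printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:default-k8s-diff-port-547715,DNS:localhost,DNS:minikube\n' > san.cnf
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -extfile san.cnf -days 365 -out server.pem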
	I0929 09:38:06.863319  744475 provision.go:177] copyRemoteCerts
	I0929 09:38:06.863393  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 09:38:06.863443  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.882627  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:06.979437  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 09:38:07.004710  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 09:38:07.029798  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 09:38:07.054802  744475 provision.go:87] duration metric: took 369.089658ms to configureAuth
	I0929 09:38:07.054846  744475 ubuntu.go:206] setting minikube options for container-runtime
	I0929 09:38:07.055025  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:07.055152  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.073937  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:07.074181  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:07.074200  744475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 09:38:07.357669  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 09:38:07.357696  744475 machine.go:96] duration metric: took 4.15386954s to provisionDockerMachine
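The CRIO_MINIKUBE_OPTIONS drop-in written just above only matters if the crio.service unit in the kicbase image actually sources /etc/sysconfig/crio.minikube; assuming that wiring, a quick post-restart check is whether the flag reached the running daemon:
	# List the crio process with its full command line and look for the injected flag.
	pgrep -a crio | grep -- '--insecure-registry 10.96.0.0/12'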
	I0929 09:38:07.357709  744475 start.go:293] postStartSetup for "default-k8s-diff-port-547715" (driver="docker")
	I0929 09:38:07.357723  744475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 09:38:07.357795  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 09:38:07.357864  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.376587  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.473948  744475 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 09:38:07.477599  744475 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 09:38:07.477638  744475 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 09:38:07.477651  744475 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 09:38:07.477659  744475 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 09:38:07.477675  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 09:38:07.477729  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 09:38:07.477798  744475 filesync.go:149] local asset: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem -> 3862252.pem in /etc/ssl/certs
	I0929 09:38:07.477941  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 09:38:07.487030  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:07.511935  744475 start.go:296] duration metric: took 154.207911ms for postStartSetup
	I0929 09:38:07.512029  744475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:38:07.512065  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.530146  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.622415  744475 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 09:38:07.627142  744475 fix.go:56] duration metric: took 4.794784277s for fixHost
	I0929 09:38:07.627172  744475 start.go:83] releasing machines lock for "default-k8s-diff-port-547715", held for 4.794838826s
	I0929 09:38:07.627231  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:07.645874  744475 ssh_runner.go:195] Run: cat /version.json
	I0929 09:38:07.645918  744475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 09:38:07.645945  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.645972  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.664991  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.665181  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.828453  744475 ssh_runner.go:195] Run: systemctl --version
	I0929 09:38:07.833549  744475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 09:38:07.976610  744475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 09:38:07.981640  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:07.991646  744475 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 09:38:07.991738  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:08.001522  744475 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 09:38:08.001550  744475 start.go:495] detecting cgroup driver to use...
	I0929 09:38:08.001586  744475 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 09:38:08.001645  744475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 09:38:08.014507  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 09:38:08.026523  744475 docker.go:218] disabling cri-docker service (if available) ...
	I0929 09:38:08.026594  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 09:38:08.040674  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 09:38:08.052914  744475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 09:38:08.121663  744475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 09:38:08.190873  744475 docker.go:234] disabling docker service ...
	I0929 09:38:08.190996  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 09:38:08.203929  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 09:38:08.215853  744475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 09:38:08.282230  744475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 09:38:08.347410  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 09:38:08.359320  744475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 09:38:08.376309  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
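That checksum URL pattern can be exercised by hand; the following reproduces the verification minikube is describing, using the exact URLs from the log line and the standard sha256sum check format:
	# Download kubeadm v1.34.1 for linux/amd64 and verify it against the published SHA-256.
	curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm"
	echo "$(curl -sL https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256)  kubeadm" | sha256sum --check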
	I0929 09:38:08.524854  744475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 09:38:08.524933  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.536486  744475 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 09:38:08.536545  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.547317  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.557769  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.568183  744475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 09:38:08.578182  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.588665  744475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.598857  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
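Taken together, those sed edits converge on a small CRI-O drop-in. A sketch of the end state is below; the values are exactly the ones set above, but the TOML section headers are illustrative, since the log never prints the file itself:
	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# [crio.image]
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# [crio.runtime]
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]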
	I0929 09:38:08.609520  744475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 09:38:08.618464  744475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 09:38:08.627869  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:08.694951  744475 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 09:38:08.976752  744475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 09:38:08.976819  744475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 09:38:08.980869  744475 start.go:563] Will wait 60s for crictl version
	I0929 09:38:08.980932  744475 ssh_runner.go:195] Run: which crictl
	I0929 09:38:08.984701  744475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 09:38:09.019500  744475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 09:38:09.019620  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.055087  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.091964  744475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	W0929 09:38:08.724626  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:09.223924  739826 pod_ready.go:94] pod "coredns-66bc5c9577-ncwp4" is "Ready"
	I0929 09:38:09.224002  739826 pod_ready.go:86] duration metric: took 41.005435401s for pod "coredns-66bc5c9577-ncwp4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.226573  739826 pod_ready.go:83] waiting for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.230177  739826 pod_ready.go:94] pod "etcd-no-preload-730717" is "Ready"
	I0929 09:38:09.230196  739826 pod_ready.go:86] duration metric: took 3.600648ms for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.232019  739826 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.235556  739826 pod_ready.go:94] pod "kube-apiserver-no-preload-730717" is "Ready"
	I0929 09:38:09.235574  739826 pod_ready.go:86] duration metric: took 3.535675ms for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.237200  739826 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.422451  739826 pod_ready.go:94] pod "kube-controller-manager-no-preload-730717" is "Ready"
	I0929 09:38:09.422486  739826 pod_ready.go:86] duration metric: took 185.263743ms for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.623052  739826 pod_ready.go:83] waiting for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.022664  739826 pod_ready.go:94] pod "kube-proxy-4bmgw" is "Ready"
	I0929 09:38:10.022689  739826 pod_ready.go:86] duration metric: took 399.612543ms for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.224443  739826 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622809  739826 pod_ready.go:94] pod "kube-scheduler-no-preload-730717" is "Ready"
	I0929 09:38:10.622852  739826 pod_ready.go:86] duration metric: took 398.374387ms for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622869  739826 pod_ready.go:40] duration metric: took 42.407933129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:10.670550  739826 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:10.673808  739826 out.go:179] * Done! kubectl is now configured to use "no-preload-730717" cluster and "default" namespace by default
	I0929 09:38:09.093120  744475 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-547715 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 09:38:09.111264  744475 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 09:38:09.115466  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:09.127999  744475 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 09:38:09.128194  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.274999  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.416048  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.554074  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:09.554387  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.693270  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.833942  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.976460  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.021351  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.021374  744475 crio.go:433] Images already preloaded, skipping extraction
	I0929 09:38:10.021423  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.057863  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.057891  744475 cache_images.go:85] Images are preloaded, skipping loading
	I0929 09:38:10.057901  744475 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I0929 09:38:10.058037  744475 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-547715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
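One detail worth calling out in that kubelet drop-in: the bare ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before the override supplies the full command line. The merged unit can be inspected on the node with:
	# Show kubelet.service together with every drop-in that overrides it.
	systemctl cat kubelet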
	I0929 09:38:10.058111  744475 ssh_runner.go:195] Run: crio config
	I0929 09:38:10.102165  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:10.102193  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:10.102207  744475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 09:38:10.102236  744475 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-547715 NodeName:default-k8s-diff-port-547715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 09:38:10.102404  744475 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-547715"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
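A config assembled this way can be sanity-checked before kubeadm consumes it. Recent kubeadm releases ship a validate subcommand, so once the file lands at the path used below (/var/tmp/minikube/kubeadm.yaml.new), something along these lines would catch schema mistakes; this is a sketch, not part of the test run:
	# Validate the generated config against kubeadm's v1beta4 / kubelet / kube-proxy schemas.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new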
	
	I0929 09:38:10.102481  744475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 09:38:10.112188  744475 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 09:38:10.112255  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 09:38:10.121661  744475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 09:38:10.140487  744475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 09:38:10.160494  744475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0929 09:38:10.179722  744475 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 09:38:10.183977  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:10.196126  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.262691  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.292254  744475 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715 for IP: 192.168.85.2
	I0929 09:38:10.292283  744475 certs.go:194] generating shared ca certs ...
	I0929 09:38:10.292301  744475 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.292443  744475 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 09:38:10.292483  744475 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 09:38:10.292493  744475 certs.go:256] generating profile certs ...
	I0929 09:38:10.292592  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/client.key
	I0929 09:38:10.292649  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key.78d67a41
	I0929 09:38:10.292690  744475 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key
	I0929 09:38:10.292789  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem (1338 bytes)
	W0929 09:38:10.292816  744475 certs.go:480] ignoring /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225_empty.pem, impossibly tiny 0 bytes
	I0929 09:38:10.292825  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 09:38:10.292877  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 09:38:10.292902  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 09:38:10.292924  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 09:38:10.292963  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:10.293652  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 09:38:10.320976  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 09:38:10.349012  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 09:38:10.381487  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 09:38:10.406553  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 09:38:10.432469  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 09:38:10.458734  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 09:38:10.483339  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 09:38:10.508019  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /usr/share/ca-certificates/3862252.pem (1708 bytes)
	I0929 09:38:10.533382  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 09:38:10.558362  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem --> /usr/share/ca-certificates/386225.pem (1338 bytes)
	I0929 09:38:10.583377  744475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 09:38:10.602070  744475 ssh_runner.go:195] Run: openssl version
	I0929 09:38:10.607660  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3862252.pem && ln -fs /usr/share/ca-certificates/3862252.pem /etc/ssl/certs/3862252.pem"
	I0929 09:38:10.617911  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622307  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 08:48 /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622354  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.629918  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3862252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 09:38:10.640804  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 09:38:10.651151  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655258  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655316  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.662603  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 09:38:10.672822  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386225.pem && ln -fs /usr/share/ca-certificates/386225.pem /etc/ssl/certs/386225.pem"
	I0929 09:38:10.683319  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687277  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 08:48 /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687348  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.696079  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386225.pem /etc/ssl/certs/51391683.0"
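To make the trust-store steps above easier to follow (copy each PEM into /usr/share/ca-certificates, hash it with openssl, then symlink /etc/ssl/certs/<hash>.0 to it), here is a minimal Go sketch of the same procedure. The installCACert helper and the hard-coded path are illustrative assumptions, not minikube's actual certs package.

// Sketch only: mirrors the openssl-hash-and-symlink steps from the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links certPath into /etc/ssl/certs under its openssl subject hash,
// i.e. what `openssl x509 -hash -noout -in <cert>` followed by `ln -fs` does above.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: replace any existing link so repeated runs stay idempotent.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}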
	I0929 09:38:10.707660  744475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 09:38:10.711977  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 09:38:10.719705  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 09:38:10.727227  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 09:38:10.734938  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 09:38:10.742331  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 09:38:10.750000  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
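The -checkend 86400 probes above simply ask whether each control-plane certificate expires within the next 24 hours. A small Go equivalent using crypto/x509, with an illustrative expiresWithin helper that is not part of minikube, might look like:

// Sketch only: the crypto/x509 analogue of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath expires inside the given window.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}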
	I0929 09:38:10.758994  744475 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:10.759111  744475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 09:38:10.759156  744475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 09:38:10.801701  744475 cri.go:89] found id: ""
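The empty `found id: ""` above is the result of the crictl invocation on the previous line: no kube-system containers were running yet. A hedged Go sketch that runs the same command and collects the returned container IDs (the listKubeSystemContainers helper is illustrative, not minikube's cri package):

// Sketch only: exec the crictl command shown in the log and split its output into IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}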
	I0929 09:38:10.801777  744475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 09:38:10.814003  744475 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 09:38:10.814030  744475 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 09:38:10.814082  744475 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 09:38:10.825280  744475 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:38:10.826421  744475 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-547715" does not appear in /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.827379  744475 kubeconfig.go:62] /home/jenkins/minikube-integration/21650-382648/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-547715" cluster setting kubeconfig missing "default-k8s-diff-port-547715" context setting]
	I0929 09:38:10.828702  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.830983  744475 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 09:38:10.843171  744475 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 09:38:10.843214  744475 kubeadm.go:593] duration metric: took 29.177344ms to restartPrimaryControlPlane
	I0929 09:38:10.843227  744475 kubeadm.go:394] duration metric: took 84.244515ms to StartCluster
	I0929 09:38:10.843248  744475 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.843363  744475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.845603  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.846384  744475 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 09:38:10.846454  744475 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 09:38:10.846542  744475 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846565  744475 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846574  744475 addons.go:247] addon storage-provisioner should already be in state true
	I0929 09:38:10.846575  744475 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846596  744475 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846614  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846620  744475 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:10.846621  744475 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-547715"
	I0929 09:38:10.846618  744475 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846630  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:10.846642  744475 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846656  744475 addons.go:247] addon dashboard should already be in state true
	W0929 09:38:10.846631  744475 addons.go:247] addon metrics-server should already be in state true
	I0929 09:38:10.846681  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846697  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846974  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847135  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847150  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847155  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.848072  744475 out.go:179] * Verifying Kubernetes components...
	I0929 09:38:10.849415  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.877953  744475 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 09:38:10.877980  744475 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 09:38:10.878525  744475 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.878545  744475 addons.go:247] addon default-storageclass should already be in state true
	I0929 09:38:10.878575  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.879047  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.879403  744475 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 09:38:10.879439  744475 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:10.879448  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 09:38:10.879475  744475 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 09:38:10.879548  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.879454  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 09:38:10.879612  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.883150  744475 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 09:38:10.884341  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 09:38:10.884361  744475 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 09:38:10.884428  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.910318  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.910796  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.911948  744475 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:10.911964  744475 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 09:38:10.912016  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.914592  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.935385  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.956363  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.989150  744475 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:11.038321  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:11.042162  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 09:38:11.042187  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 09:38:11.047218  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 09:38:11.047242  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 09:38:11.070239  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.072804  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 09:38:11.072828  744475 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 09:38:11.078863  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 09:38:11.078893  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 09:38:11.104886  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.104914  744475 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 09:38:11.110131  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 09:38:11.110158  744475 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 09:38:11.142191  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 09:38:11.142219  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 09:38:11.148094  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.148238  744475 retry.go:31] will retry after 359.205678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.151384  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.179885  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 09:38:11.179923  744475 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 09:38:11.182481  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.182514  744475 retry.go:31] will retry after 316.417959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.208649  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 09:38:11.208682  744475 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 09:38:11.232655  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 09:38:11.232724  744475 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 09:38:11.252807  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 09:38:11.252860  744475 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 09:38:11.272945  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.272972  744475 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 09:38:11.292603  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.499678  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.508207  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
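The apply failures above are expected while the restarted apiserver is still refusing connections on localhost:8444; the runner schedules a short backoff and re-applies. A minimal Go sketch of that retry pattern, assuming an illustrative applyWithRetry helper and a hard-coded kubeconfig path rather than minikube's retry package:

// Sketch only: retry `kubectl apply` with a fixed backoff until the apiserver accepts it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err != nil {
			lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
			time.Sleep(backoff) // give the apiserver time to finish starting before retrying
			continue
		}
		return nil
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5, 500*time.Millisecond)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}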
	I0929 09:38:12.841081  744475 node_ready.go:49] node "default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:12.841123  744475 node_ready.go:38] duration metric: took 1.85187108s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:12.841142  744475 api_server.go:52] waiting for apiserver process to appear ...
	I0929 09:38:12.841200  744475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:38:13.424995  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.273447364s)
	I0929 09:38:13.425060  744475 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:13.425163  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.132513063s)
	I0929 09:38:13.425661  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.925949942s)
	I0929 09:38:13.425900  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.917662767s)
	I0929 09:38:13.426006  744475 api_server.go:72] duration metric: took 2.57958819s to wait for apiserver process to appear ...
	I0929 09:38:13.426024  744475 api_server.go:88] waiting for apiserver healthz status ...
	I0929 09:38:13.426045  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.427072  744475 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-547715 addons enable metrics-server
	
	I0929 09:38:13.431499  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.431522  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:13.435572  744475 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0929 09:38:13.436883  744475 addons.go:514] duration metric: took 2.590443822s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0929 09:38:13.926913  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.932318  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.932348  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:14.426994  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:14.431739  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0929 09:38:14.432753  744475 api_server.go:141] control plane version: v1.34.1
	I0929 09:38:14.432785  744475 api_server.go:131] duration metric: took 1.006754243s to wait for apiserver health ...
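The healthz sequence above polls https://192.168.85.2:8444/healthz roughly every half second until the 500 responses (rbac and priority-class post-start hooks still pending) turn into a 200. A self-contained Go sketch of that loop, with an illustrative waitForHealthz helper and TLS verification skipped only to keep the example short:

// Sketch only: poll the apiserver healthz endpoint until it returns HTTP 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval, matching the ~0.5s cadence in the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}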
	I0929 09:38:14.432798  744475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 09:38:14.435903  744475 system_pods.go:59] 9 kube-system pods found
	I0929 09:38:14.435952  744475 system_pods.go:61] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.435967  744475 system_pods.go:61] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.435982  744475 system_pods.go:61] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.435998  744475 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.436014  744475 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.436023  744475 system_pods.go:61] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.436033  744475 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.436045  744475 system_pods.go:61] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.436053  744475 system_pods.go:61] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.436063  744475 system_pods.go:74] duration metric: took 3.257318ms to wait for pod list to return data ...
	I0929 09:38:14.436077  744475 default_sa.go:34] waiting for default service account to be created ...
	I0929 09:38:14.438271  744475 default_sa.go:45] found service account: "default"
	I0929 09:38:14.438293  744475 default_sa.go:55] duration metric: took 2.206178ms for default service account to be created ...
	I0929 09:38:14.438304  744475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 09:38:14.441520  744475 system_pods.go:86] 9 kube-system pods found
	I0929 09:38:14.441555  744475 system_pods.go:89] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.441569  744475 system_pods.go:89] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.441583  744475 system_pods.go:89] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.441591  744475 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.441606  744475 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.441613  744475 system_pods.go:89] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.441622  744475 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.441633  744475 system_pods.go:89] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.441641  744475 system_pods.go:89] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.441654  744475 system_pods.go:126] duration metric: took 3.342797ms to wait for k8s-apps to be running ...
	I0929 09:38:14.441667  744475 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 09:38:14.441718  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:38:14.457198  744475 system_svc.go:56] duration metric: took 15.510885ms WaitForService to wait for kubelet
	I0929 09:38:14.457234  744475 kubeadm.go:578] duration metric: took 3.610818298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:14.457257  744475 node_conditions.go:102] verifying NodePressure condition ...
	I0929 09:38:14.460508  744475 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 09:38:14.460534  744475 node_conditions.go:123] node cpu capacity is 8
	I0929 09:38:14.460550  744475 node_conditions.go:105] duration metric: took 3.287088ms to run NodePressure ...
	I0929 09:38:14.460566  744475 start.go:241] waiting for startup goroutines ...
	I0929 09:38:14.460575  744475 start.go:246] waiting for cluster config update ...
	I0929 09:38:14.460591  744475 start.go:255] writing updated cluster config ...
	I0929 09:38:14.461011  744475 ssh_runner.go:195] Run: rm -f paused
	I0929 09:38:14.465262  744475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:14.469249  744475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 09:38:16.474616  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:18.974817  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:21.474679  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:23.974653  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:25.974904  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:27.975234  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:30.474414  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:32.475244  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:34.975746  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:37.474689  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:39.974324  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:42.474794  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:44.476364  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:46.974499  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:49.474657  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:51.474940  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	I0929 09:38:52.974403  744475 pod_ready.go:94] pod "coredns-66bc5c9577-szmnf" is "Ready"
	I0929 09:38:52.974429  744475 pod_ready.go:86] duration metric: took 38.50515659s for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.977032  744475 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.980878  744475 pod_ready.go:94] pod "etcd-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.980904  744475 pod_ready.go:86] duration metric: took 3.847603ms for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.982681  744475 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.986175  744475 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.986196  744475 pod_ready.go:86] duration metric: took 3.493752ms for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.988006  744475 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.172805  744475 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:53.172860  744475 pod_ready.go:86] duration metric: took 184.829323ms for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.372987  744475 pod_ready.go:83] waiting for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.772398  744475 pod_ready.go:94] pod "kube-proxy-tklgn" is "Ready"
	I0929 09:38:53.772428  744475 pod_ready.go:86] duration metric: took 399.413461ms for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.972993  744475 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373344  744475 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:54.373370  744475 pod_ready.go:86] duration metric: took 400.353446ms for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373382  744475 pod_ready.go:40] duration metric: took 39.908092821s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:54.420218  744475 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:54.422092  744475 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-547715" cluster and "default" namespace by default
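The per-pod readiness waits above (coredns, etcd, kube-apiserver, and so on) amount to polling each pod's Ready condition until it reports True or the timeout expires. A hedged client-go sketch of that loop, assuming an illustrative waitPodReady helper and the default kubeconfig location rather than minikube's pod_ready logic:

// Sketch only: poll a pod's Ready condition with client-go until it is True or the timeout hits.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not found yet or transient apiserver error: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-66bc5c9577-szmnf", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}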
	
	
	==> CRI-O <==
	Sep 29 09:46:25 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:46:25.382139150Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=dacaaf54-939b-428a-abc0-b747b2d2365d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:37 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:46:37.382127552Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0f594e44-6547-4744-ad85-cfd47d54a561 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:37 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:46:37.382409384Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0f594e44-6547-4744-ad85-cfd47d54a561 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:38 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:46:38.381780269Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d5da4763-5954-4e40-81c3-a8a044c7efd2 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:38 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:46:38.382096536Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=d5da4763-5954-4e40-81c3-a8a044c7efd2 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:38 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:46:38.382660608Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e05224d1-910e-4be2-8628-fc51c516cbe1 name=/runtime.v1.ImageService/PullImage
	Sep 29 09:46:38 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:46:38.398675303Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 09:46:49 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:46:49.381391822Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=52b751d1-f7b1-4ea2-8dd7-41fdf57ff50f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:46:49 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:46:49.381623563Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=52b751d1-f7b1-4ea2-8dd7-41fdf57ff50f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:00 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:00.381823542Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9089c2b3-a0d6-415e-a62c-aa5533e367b8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:00 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:00.382132428Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9089c2b3-a0d6-415e-a62c-aa5533e367b8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:11 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:11.381725143Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=fe9d9c5d-b9df-4a94-8cfe-ee66d62f96da name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:11 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:11.381999115Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=fe9d9c5d-b9df-4a94-8cfe-ee66d62f96da name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:20 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:20.381936022Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=da8af64d-7111-440e-b02a-6dbfc2b8d212 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:20 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:20.382269163Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=da8af64d-7111-440e-b02a-6dbfc2b8d212 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:26 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:26.381589598Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2c9888df-6ebe-43c2-a297-d11f53c67e9e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:26 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:26.381932472Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2c9888df-6ebe-43c2-a297-d11f53c67e9e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:33 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:33.381394040Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=caeef217-ee22-4255-b3b2-de723080941e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:33 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:33.381721758Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=caeef217-ee22-4255-b3b2-de723080941e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:37 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:37.381912201Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=03030cad-22a2-41df-83dc-53980526d12a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:37 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:37.382115873Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=03030cad-22a2-41df-83dc-53980526d12a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:45 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:45.382029682Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f7b59e26-cc97-45c5-87f2-f5daadb4b063 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:45 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:45.382340205Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=f7b59e26-cc97-45c5-87f2-f5daadb4b063 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:52 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:52.381798085Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=62b58c25-6fcf-4166-89d9-8d410c191951 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:47:52 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:47:52.382087436Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=62b58c25-6fcf-4166-89d9-8d410c191951 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	39aef6a475277       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   578491b028ba5       dashboard-metrics-scraper-6ffb444bf9-dtdv9
	0766e166e039f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   a7999b6883608       storage-provisioner
	282e1eb9eb159       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   a7999b6883608       storage-provisioner
	70ab7f2e8b6b8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   a74cb12ec9d60       kindnet-z4khf
	a47379e268889       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   9 minutes ago       Running             kube-proxy                  1                   85c62924ae93b       kube-proxy-tklgn
	6c9e0e8b13ca0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   016129f11f4d9       busybox
	a6f58acf91e8c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   7e3fdbc819f2d       coredns-66bc5c9577-szmnf
	c9d70defb42b6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 minutes ago       Running             kube-controller-manager     1                   6ef35ee579036       kube-controller-manager-default-k8s-diff-port-547715
	8722901e90377       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   3088452426d15       etcd-default-k8s-diff-port-547715
	c22423ef78077       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 minutes ago       Running             kube-scheduler              1                   ff1dce97f103e       kube-scheduler-default-k8s-diff-port-547715
	08e72b4f4dd8f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 minutes ago       Running             kube-apiserver              1                   677868a092b75       kube-apiserver-default-k8s-diff-port-547715
	
	
	==> coredns [a6f58acf91e8c557df13d6f3b1c4d00d883fa9cb0aa3a69b6ade22bdc2b28a85] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40022 - 54057 "HINFO IN 2772210620304821818.450426464391418620. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.508985837s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-547715
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-547715
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=default-k8s-diff-port-547715
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T09_37_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 09:37:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-547715
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:47:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 09:43:59 +0000   Mon, 29 Sep 2025 09:37:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 09:43:59 +0000   Mon, 29 Sep 2025 09:37:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 09:43:59 +0000   Mon, 29 Sep 2025 09:37:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 09:43:59 +0000   Mon, 29 Sep 2025 09:37:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-547715
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 4af6616ddbe04b1cbf75fc7b220ec352
	  System UUID:                31732521-1976-40d5-9acb-3d42efd87ef5
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-szmnf                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-default-k8s-diff-port-547715                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-z4khf                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-default-k8s-diff-port-547715             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-547715    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-tklgn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-default-k8s-diff-port-547715             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-lh9zv                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dtdv9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qghq7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m42s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node default-k8s-diff-port-547715 event: Registered Node default-k8s-diff-port-547715 in Controller
	  Normal  NodeReady                10m                    kubelet          Node default-k8s-diff-port-547715 status is now: NodeReady
	  Normal  Starting                 9m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m46s (x8 over 9m46s)  kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m46s (x8 over 9m46s)  kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m46s (x8 over 9m46s)  kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m40s                  node-controller  Node default-k8s-diff-port-547715 event: Registered Node default-k8s-diff-port-547715 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +24.116183] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[ +13.219794] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 36 70 5c 70 56 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[Sep29 09:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 d0 49 6d e5 00 08 06
	[  +0.000572] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +31.077955] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[  +7.090917] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ee a6 ac d9 7a 08 06
	[  +0.048507] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	[Sep29 09:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 9c 10 70 fc bc 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[ +35.403219] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 f0 eb 9a e4 7a 08 06
	[  +0.000378] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	
	
	==> etcd [8722901e903773ba6c1b9b5c28a8383e30f3def513e7ad9bee0cfe8009efc6b5] <==
	{"level":"warn","ts":"2025-09-29T09:38:12.236341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.244456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.250656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.256994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.263030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.269342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.275882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.282763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.291895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.298974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.305226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.312289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.319152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.325353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.332891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.339630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.347050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.354204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.360756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.368343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.374861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.388331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.395270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.402241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.452257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44756","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:47:56 up  3:30,  0 users,  load average: 0.39, 0.78, 1.42
	Linux default-k8s-diff-port-547715 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [70ab7f2e8b6b84aa37a04955ea2785244f756f8668fb64a5a78ea9bcd3e77081] <==
	I0929 09:45:54.456982       1 main.go:301] handling current node
	I0929 09:46:04.460555       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:46:04.460591       1 main.go:301] handling current node
	I0929 09:46:14.465645       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:46:14.465677       1 main.go:301] handling current node
	I0929 09:46:24.457121       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:46:24.457151       1 main.go:301] handling current node
	I0929 09:46:34.465974       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:46:34.466019       1 main.go:301] handling current node
	I0929 09:46:44.462767       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:46:44.462797       1 main.go:301] handling current node
	I0929 09:46:54.457119       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:46:54.457170       1 main.go:301] handling current node
	I0929 09:47:04.457068       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:47:04.457141       1 main.go:301] handling current node
	I0929 09:47:14.463988       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:47:14.464021       1 main.go:301] handling current node
	I0929 09:47:24.460134       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:47:24.460179       1 main.go:301] handling current node
	I0929 09:47:34.457015       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:47:34.457072       1 main.go:301] handling current node
	I0929 09:47:44.465469       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:47:44.465499       1 main.go:301] handling current node
	I0929 09:47:54.458396       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:47:54.458428       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08e72b4f4dd8fd8797c4e2563f468f51c972eede4a8dc3bdcba373efd8b0050e] <==
	E0929 09:43:13.856070       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:43:13.856081       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 09:43:13.856082       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:43:13.857207       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:44:13.856725       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:44:13.856825       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:44:13.856859       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:44:13.857802       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:44:13.857842       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:44:13.857864       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:46:13.858056       1 handler_proxy.go:99] no RequestInfo found in the context
	W0929 09:46:13.858061       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:46:13.858141       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:46:13.858157       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 09:46:13.858163       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:46:13.859236       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c9d70defb42b6e720cfd1a1950b64416aea93b0f960ed6cb8d3001ef3db070f0] <==
	I0929 09:41:46.333023       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:42:16.299141       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:42:16.339895       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:42:46.303940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:42:46.347067       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:43:16.308859       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:43:16.354080       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:43:46.313390       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:43:46.361517       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:44:16.317746       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:44:16.368078       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:44:46.322273       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:44:46.375571       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:45:16.327371       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:45:16.382390       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:45:46.331713       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:45:46.389445       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:46:16.336415       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:46:16.396089       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:46:46.340863       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:46:46.403081       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:47:16.345494       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:47:16.409810       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:47:46.349384       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:47:46.416611       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [a47379e268889af5f827113214e1ef4563e0a019658984b85108534407ffeebe] <==
	I0929 09:38:14.145734       1 server_linux.go:53] "Using iptables proxy"
	I0929 09:38:14.210575       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 09:38:14.311453       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 09:38:14.311494       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 09:38:14.311599       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 09:38:14.329358       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 09:38:14.329407       1 server_linux.go:132] "Using iptables Proxier"
	I0929 09:38:14.334585       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 09:38:14.335126       1 server.go:527] "Version info" version="v1.34.1"
	I0929 09:38:14.335160       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:38:14.336746       1 config.go:200] "Starting service config controller"
	I0929 09:38:14.336772       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 09:38:14.336803       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 09:38:14.336808       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 09:38:14.336996       1 config.go:106] "Starting endpoint slice config controller"
	I0929 09:38:14.337019       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 09:38:14.337051       1 config.go:309] "Starting node config controller"
	I0929 09:38:14.337066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 09:38:14.337073       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 09:38:14.437396       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 09:38:14.437394       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 09:38:14.437594       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c22423ef78077ac2bf7ffed8f5b51a4238c30f39b0767c047837122c5b00b85f] <==
	I0929 09:38:11.781432       1 serving.go:386] Generated self-signed cert in-memory
	W0929 09:38:12.844960       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 09:38:12.844999       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0929 09:38:12.845012       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 09:38:12.845021       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 09:38:12.877438       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I0929 09:38:12.877600       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:38:12.880984       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:38:12.881043       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:38:12.881280       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 09:38:12.881350       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 09:38:12.981267       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 09:47:10 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:10.441145     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139230440926921  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:47:10 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:10.441189     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139230440926921  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:47:11 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:11.382396     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-lh9zv" podUID="4dd3d308-ff96-4085-9bc5-05d915186915"
	Sep 29 09:47:12 default-k8s-diff-port-547715 kubelet[696]: I0929 09:47:12.381408     696 scope.go:117] "RemoveContainer" containerID="39aef6a475277fab263867249e200348397051dcd42e5bf9f4067e0c13c760d5"
	Sep 29 09:47:12 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:12.381672     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtdv9_kubernetes-dashboard(12be6e28-2b06-42d9-acaf-e21b41be2e10)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtdv9" podUID="12be6e28-2b06-42d9-acaf-e21b41be2e10"
	Sep 29 09:47:20 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:20.382630     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qghq7" podUID="d0d73ee5-b7eb-4f95-a577-03315e1c1e0a"
	Sep 29 09:47:20 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:20.442396     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139240442135059  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:47:20 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:20.442432     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139240442135059  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:47:26 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:26.382241     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-lh9zv" podUID="4dd3d308-ff96-4085-9bc5-05d915186915"
	Sep 29 09:47:27 default-k8s-diff-port-547715 kubelet[696]: I0929 09:47:27.381474     696 scope.go:117] "RemoveContainer" containerID="39aef6a475277fab263867249e200348397051dcd42e5bf9f4067e0c13c760d5"
	Sep 29 09:47:27 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:27.381655     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtdv9_kubernetes-dashboard(12be6e28-2b06-42d9-acaf-e21b41be2e10)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtdv9" podUID="12be6e28-2b06-42d9-acaf-e21b41be2e10"
	Sep 29 09:47:30 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:30.443735     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139250443488390  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:47:30 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:30.443771     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139250443488390  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:47:33 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:33.382149     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qghq7" podUID="d0d73ee5-b7eb-4f95-a577-03315e1c1e0a"
	Sep 29 09:47:37 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:37.382468     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-lh9zv" podUID="4dd3d308-ff96-4085-9bc5-05d915186915"
	Sep 29 09:47:38 default-k8s-diff-port-547715 kubelet[696]: I0929 09:47:38.381324     696 scope.go:117] "RemoveContainer" containerID="39aef6a475277fab263867249e200348397051dcd42e5bf9f4067e0c13c760d5"
	Sep 29 09:47:38 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:38.381580     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtdv9_kubernetes-dashboard(12be6e28-2b06-42d9-acaf-e21b41be2e10)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtdv9" podUID="12be6e28-2b06-42d9-acaf-e21b41be2e10"
	Sep 29 09:47:40 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:40.444941     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139260444708417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:47:40 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:40.444978     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139260444708417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:47:45 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:45.382718     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qghq7" podUID="d0d73ee5-b7eb-4f95-a577-03315e1c1e0a"
	Sep 29 09:47:49 default-k8s-diff-port-547715 kubelet[696]: I0929 09:47:49.381169     696 scope.go:117] "RemoveContainer" containerID="39aef6a475277fab263867249e200348397051dcd42e5bf9f4067e0c13c760d5"
	Sep 29 09:47:49 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:49.381513     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtdv9_kubernetes-dashboard(12be6e28-2b06-42d9-acaf-e21b41be2e10)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtdv9" podUID="12be6e28-2b06-42d9-acaf-e21b41be2e10"
	Sep 29 09:47:50 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:50.446206     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139270445981302  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:47:50 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:50.446250     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139270445981302  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:47:52 default-k8s-diff-port-547715 kubelet[696]: E0929 09:47:52.382375     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-lh9zv" podUID="4dd3d308-ff96-4085-9bc5-05d915186915"
	
	
	==> storage-provisioner [0766e166e039f4881db0b03fbcd149d9896c1040d1a3696faf2a928ae77a406b] <==
	W0929 09:47:31.936902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:33.940682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:33.945934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:35.948757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:35.953682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:37.956421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:37.961516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:39.964142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:39.968022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:41.971224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:41.975110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:43.977922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:43.981864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:45.985053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:45.989149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:47.992808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:47.996877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:50.000320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:50.005964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:52.009677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:52.014098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:54.017610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:54.023399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:56.026882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:47:56.030873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [282e1eb9eb159f34d9a6fac10bac821f634ff7c567d7339497dbea1114cc2478] <==
	I0929 09:38:14.116633       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 09:38:44.120219       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
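The component dump above (coredns, etcd, kindnet, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, kubelet, storage-provisioner, dmesg, describe nodes) matches the section format of "minikube logs". As a rough, non-authoritative sketch only, the same information could be re-collected by hand for this profile, assuming the same binary path the harness uses elsewhere in this run:

	# Sketch only: re-collect node/component logs for this profile manually.
	out/minikube-linux-amd64 -p default-k8s-diff-port-547715 logs --file=/tmp/default-k8s-diff-port-547715.logs
	# Or tail just the kubelet unit on the node:
	out/minikube-linux-amd64 -p default-k8s-diff-port-547715 ssh -- sudo journalctl -u kubelet --no-pager -n 50
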
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-lh9zv kubernetes-dashboard-855c9754f9-qghq7
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 describe pod metrics-server-746fcd58dc-lh9zv kubernetes-dashboard-855c9754f9-qghq7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-547715 describe pod metrics-server-746fcd58dc-lh9zv kubernetes-dashboard-855c9754f9-qghq7: exit status 1 (60.826991ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-lh9zv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qghq7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-547715 describe pod metrics-server-746fcd58dc-lh9zv kubernetes-dashboard-855c9754f9-qghq7: exit status 1
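Note on the NotFound errors above: the describe command was issued without a namespace, so kubectl looked in "default", while the two non-running pods live in kube-system and kubernetes-dashboard (see the node's Non-terminated Pods table earlier in the dump). An illustrative re-check with explicit namespaces, not part of the harness, would look roughly like:

	# Illustrative only: describe the two non-running pods in the namespaces
	# they actually belong to, using the same kubectl context as the helper.
	kubectl --context default-k8s-diff-port-547715 -n kube-system describe pod metrics-server-746fcd58dc-lh9zv
	kubectl --context default-k8s-diff-port-547715 -n kubernetes-dashboard describe pod kubernetes-dashboard-855c9754f9-qghq7
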
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vx2xv" [57bd21d6-20a9-46cb-bf7d-d51a2c29739e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 09:45:57.002358  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:46:15.700107  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-383226 -n old-k8s-version-383226
E0929 09:54:40.157852  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 09:54:40.365304783 +0000 UTC m=+5128.011930237
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-383226 describe po kubernetes-dashboard-8694d4445c-vx2xv -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context old-k8s-version-383226 describe po kubernetes-dashboard-8694d4445c-vx2xv -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-vx2xv
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-383226/192.168.94.2
Start Time:       Mon, 29 Sep 2025 09:36:12 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d86rf (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-d86rf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vx2xv to old-k8s-version-383226
Warning  Failed     17m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    14m (x4 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     14m (x4 over 17m)     kubelet            Error: ErrImagePull
Warning  Failed     14m (x3 over 16m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     14m (x6 over 17m)     kubelet            Error: ImagePullBackOff
Normal   BackOff    8m15s (x27 over 17m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m19s                 kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-383226 logs kubernetes-dashboard-8694d4445c-vx2xv -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-383226 logs kubernetes-dashboard-8694d4445c-vx2xv -n kubernetes-dashboard: exit status 1 (74.198144ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-vx2xv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context old-k8s-version-383226 logs kubernetes-dashboard-8694d4445c-vx2xv -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
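The events above point at Docker Hub's unauthenticated pull rate limit (toomanyrequests) on the digest-pinned dashboard image rather than at a cluster-side problem. As a rough, non-authoritative way to confirm the limit independently of the kubelet, one could retry the same pull by hand, for example:

	# Sketch only: retry the exact digest-pinned pull the kubelet attempted.
	docker pull docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
	# Or from inside the minikube node (cri-o runtime), assuming the same profile:
	out/minikube-linux-amd64 -p old-k8s-version-383226 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
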
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-383226 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-383226
helpers_test.go:243: (dbg) docker inspect old-k8s-version-383226:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086",
	        "Created": "2025-09-29T09:34:38.375266388Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 714319,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T09:35:48.810030077Z",
	            "FinishedAt": "2025-09-29T09:35:47.766053108Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086/hosts",
	        "LogPath": "/var/lib/docker/containers/7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086/7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086-json.log",
	        "Name": "/old-k8s-version-383226",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-383226:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-383226",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d2e6800721f082bf3ea3ad2f778c71e89e08912d95911f8c195c1d248b8b086",
	                "LowerDir": "/var/lib/docker/overlay2/768d18f42649f4b5782f40ecc1928fef28c427f21bda0883b12755f95e303b23-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/768d18f42649f4b5782f40ecc1928fef28c427f21bda0883b12755f95e303b23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/768d18f42649f4b5782f40ecc1928fef28c427f21bda0883b12755f95e303b23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/768d18f42649f4b5782f40ecc1928fef28c427f21bda0883b12755f95e303b23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-383226",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-383226/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-383226",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-383226",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-383226",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8779a5e7de68063ae1be898d629bdb7cebf5b9087119cf86a1ddb0929e88abac",
	            "SandboxKey": "/var/run/docker/netns/8779a5e7de68",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-383226": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:25:e4:c3:7f:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a63824d25a59469f34d03b2b3a3d3f9286340373bc3c74439b9e2ad87eb7dbfe",
	                    "EndpointID": "34d175306776277a1faba9493dc693e3567154d18d3dd5acb8dbb70128bd39b5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-383226",
	                        "7d2e6800721f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
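The inspect output above shows every exposed container port bound to 127.0.0.1 with a dynamically assigned HostPort (22/tcp lands on 33471 here). A small sketch of reading that mapping back out with the same Go template the harness uses later in this log; the container name is taken from the output above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test harness passes to `docker container inspect -f`
	// to find the host port mapped to the container's 22/tcp (SSH) endpoint.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"old-k8s-version-383226").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}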
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-383226 -n old-k8s-version-383226
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-383226 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-383226 logs -n 25: (1.207662806s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-646399 sudo crio config                                                                                                                                                                                                             │ bridge-646399                │ jenkins │ v1.37.0 │ 29 Sep 25 09:35 UTC │ 29 Sep 25 09:35 UTC │
	│ delete  │ -p bridge-646399                                                                                                                                                                                                                              │ bridge-646399                │ jenkins │ v1.37.0 │ 29 Sep 25 09:35 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-463478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p embed-certs-463478 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-879079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p newest-cni-879079 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-879079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-463478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ image   │ newest-cni-879079 image list --format=json                                                                                                                                                                                                    │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ pause   │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ unpause │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-730717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p no-preload-730717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable dashboard -p no-preload-730717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ start   │ -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-547715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p default-k8s-diff-port-547715 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-547715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 09:38:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 09:38:02.602451  744475 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:38:02.604572  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604588  744475 out.go:374] Setting ErrFile to fd 2...
	I0929 09:38:02.604596  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604882  744475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:38:02.605487  744475 out.go:368] Setting JSON to false
	I0929 09:38:02.606828  744475 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12032,"bootTime":1759126651,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 09:38:02.606958  744475 start.go:140] virtualization: kvm guest
	I0929 09:38:02.608781  744475 out.go:179] * [default-k8s-diff-port-547715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 09:38:02.610638  744475 notify.go:220] Checking for updates...
	I0929 09:38:02.610689  744475 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 09:38:02.611947  744475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 09:38:02.613292  744475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:02.614515  744475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 09:38:02.615846  744475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 09:38:02.617298  744475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 09:38:02.619049  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:02.619871  744475 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 09:38:02.651910  744475 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 09:38:02.652021  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.724566  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.711673677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.724736  744475 docker.go:318] overlay module found
	I0929 09:38:02.726847  744475 out.go:179] * Using the docker driver based on existing profile
	I0929 09:38:02.727965  744475 start.go:304] selected driver: docker
	I0929 09:38:02.727982  744475 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.728131  744475 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 09:38:02.728938  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.798201  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.786507737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.798574  744475 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:02.798625  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:02.798695  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:02.798744  744475 start.go:348] cluster config:
	{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.803960  744475 out.go:179] * Starting "default-k8s-diff-port-547715" primary control-plane node in "default-k8s-diff-port-547715" cluster
	I0929 09:38:02.805367  744475 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 09:38:02.806633  744475 out.go:179] * Pulling base image v0.0.48 ...
	I0929 09:38:02.807764  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:02.807815  744475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 09:38:02.807849  744475 cache.go:58] Caching tarball of preloaded images
	I0929 09:38:02.807847  744475 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 09:38:02.807982  744475 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 09:38:02.808000  744475 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 09:38:02.808163  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:02.832169  744475 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 09:38:02.832193  744475 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 09:38:02.832223  744475 cache.go:232] Successfully downloaded all kic artifacts
	I0929 09:38:02.832255  744475 start.go:360] acquireMachinesLock for default-k8s-diff-port-547715: {Name:mkef8140f377b4de895c8571ff44e24be4754e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 09:38:02.832319  744475 start.go:364] duration metric: took 42.901µs to acquireMachinesLock for "default-k8s-diff-port-547715"
	I0929 09:38:02.832343  744475 start.go:96] Skipping create...Using existing machine configuration
	I0929 09:38:02.832351  744475 fix.go:54] fixHost starting: 
	I0929 09:38:02.832639  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:02.854072  744475 fix.go:112] recreateIfNeeded on default-k8s-diff-port-547715: state=Stopped err=<nil>
	W0929 09:38:02.854102  744475 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 09:38:02.225099  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:04.724187  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:06.724381  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:02.857616  744475 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-547715" ...
	I0929 09:38:02.857727  744475 cli_runner.go:164] Run: docker start default-k8s-diff-port-547715
	I0929 09:38:03.156711  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:03.180888  744475 kic.go:430] container "default-k8s-diff-port-547715" state is running.
	I0929 09:38:03.181888  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:03.203574  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:03.203810  744475 machine.go:93] provisionDockerMachine start ...
	I0929 09:38:03.203918  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:03.225450  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:03.225788  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:03.225809  744475 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 09:38:03.226519  744475 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33470->127.0.0.1:33506: read: connection reset by peer
	I0929 09:38:06.363220  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.363248  744475 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-547715"
	I0929 09:38:06.363324  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.381317  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.381536  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.381550  744475 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-547715 && echo "default-k8s-diff-port-547715" | sudo tee /etc/hostname
	I0929 09:38:06.531735  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.531842  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.549948  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.550236  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.550256  744475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-547715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-547715/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-547715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 09:38:06.685613  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 09:38:06.685649  744475 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 09:38:06.685684  744475 ubuntu.go:190] setting up certificates
	I0929 09:38:06.685695  744475 provision.go:84] configureAuth start
	I0929 09:38:06.685750  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:06.704839  744475 provision.go:143] copyHostCerts
	I0929 09:38:06.704915  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem, removing ...
	I0929 09:38:06.704934  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem
	I0929 09:38:06.705006  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 09:38:06.705139  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem, removing ...
	I0929 09:38:06.705152  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem
	I0929 09:38:06.705182  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 09:38:06.705261  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem, removing ...
	I0929 09:38:06.705269  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem
	I0929 09:38:06.705295  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 09:38:06.705471  744475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-547715 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-547715 localhost minikube]
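provision.go generates a server certificate whose SANs cover loopback, the node IP and the profile hostnames listed above. A minimal crypto/x509 sketch producing a certificate with the same SAN set; it self-signs for brevity (minikube actually signs with the CA key referenced above), and error handling is elided:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-547715"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-547715", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}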
	I0929 09:38:06.863319  744475 provision.go:177] copyRemoteCerts
	I0929 09:38:06.863393  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 09:38:06.863443  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.882627  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:06.979437  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 09:38:07.004710  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 09:38:07.029798  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 09:38:07.054802  744475 provision.go:87] duration metric: took 369.089658ms to configureAuth
	I0929 09:38:07.054846  744475 ubuntu.go:206] setting minikube options for container-runtime
	I0929 09:38:07.055025  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:07.055152  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.073937  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:07.074181  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:07.074200  744475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 09:38:07.357669  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 09:38:07.357696  744475 machine.go:96] duration metric: took 4.15386954s to provisionDockerMachine
	I0929 09:38:07.357709  744475 start.go:293] postStartSetup for "default-k8s-diff-port-547715" (driver="docker")
	I0929 09:38:07.357723  744475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 09:38:07.357795  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 09:38:07.357864  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.376587  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.473948  744475 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 09:38:07.477599  744475 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 09:38:07.477638  744475 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 09:38:07.477651  744475 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 09:38:07.477659  744475 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 09:38:07.477675  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 09:38:07.477729  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 09:38:07.477798  744475 filesync.go:149] local asset: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem -> 3862252.pem in /etc/ssl/certs
	I0929 09:38:07.477941  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 09:38:07.487030  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:07.511935  744475 start.go:296] duration metric: took 154.207911ms for postStartSetup
	I0929 09:38:07.512029  744475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:38:07.512065  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.530146  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.622415  744475 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 09:38:07.627142  744475 fix.go:56] duration metric: took 4.794784277s for fixHost
	I0929 09:38:07.627172  744475 start.go:83] releasing machines lock for "default-k8s-diff-port-547715", held for 4.794838826s
	I0929 09:38:07.627231  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:07.645874  744475 ssh_runner.go:195] Run: cat /version.json
	I0929 09:38:07.645918  744475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 09:38:07.645945  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.645972  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.664991  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.665181  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.828453  744475 ssh_runner.go:195] Run: systemctl --version
	I0929 09:38:07.833549  744475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 09:38:07.976610  744475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 09:38:07.981640  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:07.991646  744475 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 09:38:07.991738  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:08.001522  744475 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 09:38:08.001550  744475 start.go:495] detecting cgroup driver to use...
	I0929 09:38:08.001586  744475 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 09:38:08.001645  744475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 09:38:08.014507  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 09:38:08.026523  744475 docker.go:218] disabling cri-docker service (if available) ...
	I0929 09:38:08.026594  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 09:38:08.040674  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 09:38:08.052914  744475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 09:38:08.121663  744475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 09:38:08.190873  744475 docker.go:234] disabling docker service ...
	I0929 09:38:08.190996  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 09:38:08.203929  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 09:38:08.215853  744475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 09:38:08.282230  744475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 09:38:08.347410  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 09:38:08.359320  744475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 09:38:08.376309  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:08.524854  744475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 09:38:08.524933  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.536486  744475 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 09:38:08.536545  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.547317  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.557769  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.568183  744475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 09:38:08.578182  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.588665  744475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.598857  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.609520  744475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 09:38:08.618464  744475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 09:38:08.627869  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:08.694951  744475 ssh_runner.go:195] Run: sudo systemctl restart crio
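The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cri-o to the systemd cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports. A minimal Go sketch of the two central substitutions applied to an inline sample drop-in; the sample content is an illustrative assumption, not the node's real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical 02-crio.conf drop-in before the edits.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
`
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}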
	I0929 09:38:08.976752  744475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 09:38:08.976819  744475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 09:38:08.980869  744475 start.go:563] Will wait 60s for crictl version
	I0929 09:38:08.980932  744475 ssh_runner.go:195] Run: which crictl
	I0929 09:38:08.984701  744475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 09:38:09.019500  744475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 09:38:09.019620  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.055087  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.091964  744475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	W0929 09:38:08.724626  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:09.223924  739826 pod_ready.go:94] pod "coredns-66bc5c9577-ncwp4" is "Ready"
	I0929 09:38:09.224002  739826 pod_ready.go:86] duration metric: took 41.005435401s for pod "coredns-66bc5c9577-ncwp4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.226573  739826 pod_ready.go:83] waiting for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.230177  739826 pod_ready.go:94] pod "etcd-no-preload-730717" is "Ready"
	I0929 09:38:09.230196  739826 pod_ready.go:86] duration metric: took 3.600648ms for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.232019  739826 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.235556  739826 pod_ready.go:94] pod "kube-apiserver-no-preload-730717" is "Ready"
	I0929 09:38:09.235574  739826 pod_ready.go:86] duration metric: took 3.535675ms for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.237200  739826 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.422451  739826 pod_ready.go:94] pod "kube-controller-manager-no-preload-730717" is "Ready"
	I0929 09:38:09.422486  739826 pod_ready.go:86] duration metric: took 185.263743ms for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.623052  739826 pod_ready.go:83] waiting for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.022664  739826 pod_ready.go:94] pod "kube-proxy-4bmgw" is "Ready"
	I0929 09:38:10.022689  739826 pod_ready.go:86] duration metric: took 399.612543ms for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.224443  739826 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622809  739826 pod_ready.go:94] pod "kube-scheduler-no-preload-730717" is "Ready"
	I0929 09:38:10.622852  739826 pod_ready.go:86] duration metric: took 398.374387ms for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622869  739826 pod_ready.go:40] duration metric: took 42.407933129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:10.670550  739826 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:10.673808  739826 out.go:179] * Done! kubectl is now configured to use "no-preload-730717" cluster and "default" namespace by default
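The pod_ready.go lines above record a poll loop that waits for each control-plane pod's Ready condition before declaring the cluster usable. A hedged client-go sketch of that idea (pod name, namespace and timeout are taken from the log; the code is illustrative only, not minikube's actual wait logic):

	// pod_ready_sketch.go: poll one pod until its PodReady condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-ncwp4", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}

A polling loop like this is deliberately tolerant of transient API errors: it only cares whether the Ready condition eventually flips to True before the deadline, which matches the repeated "is not \"Ready\"" warnings followed by a single "is \"Ready\"" line in the log.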
	I0929 09:38:09.093120  744475 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-547715 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 09:38:09.111264  744475 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 09:38:09.115466  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:09.127999  744475 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 09:38:09.128194  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.274999  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.416048  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.554074  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:09.554387  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.693270  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.833942  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.976460  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.021351  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.021374  744475 crio.go:433] Images already preloaded, skipping extraction
	I0929 09:38:10.021423  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.057863  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.057891  744475 cache_images.go:85] Images are preloaded, skipping loading
	I0929 09:38:10.057901  744475 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I0929 09:38:10.058037  744475 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-547715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 09:38:10.058111  744475 ssh_runner.go:195] Run: crio config
	I0929 09:38:10.102165  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:10.102193  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:10.102207  744475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 09:38:10.102236  744475 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-547715 NodeName:default-k8s-diff-port-547715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 09:38:10.102404  744475 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-547715"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 09:38:10.102481  744475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 09:38:10.112188  744475 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 09:38:10.112255  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 09:38:10.121661  744475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 09:38:10.140487  744475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 09:38:10.160494  744475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0929 09:38:10.179722  744475 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 09:38:10.183977  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:10.196126  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.262691  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.292254  744475 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715 for IP: 192.168.85.2
	I0929 09:38:10.292283  744475 certs.go:194] generating shared ca certs ...
	I0929 09:38:10.292301  744475 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.292443  744475 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 09:38:10.292483  744475 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 09:38:10.292493  744475 certs.go:256] generating profile certs ...
	I0929 09:38:10.292592  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/client.key
	I0929 09:38:10.292649  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key.78d67a41
	I0929 09:38:10.292690  744475 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key
	I0929 09:38:10.292789  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem (1338 bytes)
	W0929 09:38:10.292816  744475 certs.go:480] ignoring /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225_empty.pem, impossibly tiny 0 bytes
	I0929 09:38:10.292825  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 09:38:10.292877  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 09:38:10.292902  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 09:38:10.292924  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 09:38:10.292963  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:10.293652  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 09:38:10.320976  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 09:38:10.349012  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 09:38:10.381487  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 09:38:10.406553  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 09:38:10.432469  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 09:38:10.458734  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 09:38:10.483339  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 09:38:10.508019  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /usr/share/ca-certificates/3862252.pem (1708 bytes)
	I0929 09:38:10.533382  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 09:38:10.558362  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem --> /usr/share/ca-certificates/386225.pem (1338 bytes)
	I0929 09:38:10.583377  744475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 09:38:10.602070  744475 ssh_runner.go:195] Run: openssl version
	I0929 09:38:10.607660  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3862252.pem && ln -fs /usr/share/ca-certificates/3862252.pem /etc/ssl/certs/3862252.pem"
	I0929 09:38:10.617911  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622307  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 08:48 /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622354  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.629918  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3862252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 09:38:10.640804  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 09:38:10.651151  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655258  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655316  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.662603  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 09:38:10.672822  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386225.pem && ln -fs /usr/share/ca-certificates/386225.pem /etc/ssl/certs/386225.pem"
	I0929 09:38:10.683319  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687277  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 08:48 /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687348  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.696079  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386225.pem /etc/ssl/certs/51391683.0"
	I0929 09:38:10.707660  744475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 09:38:10.711977  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 09:38:10.719705  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 09:38:10.727227  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 09:38:10.734938  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 09:38:10.742331  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 09:38:10.750000  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
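The "-checkend 86400" probes above ask openssl whether each certificate is still valid 24 hours from now. An illustrative Go equivalent for a single certificate (the file path comes from the log; the rest is assumed and not how minikube itself performs the check):

	// cert_checkend.go: parse a PEM certificate and verify it does not expire
	// within the next 24 hours, mirroring "openssl x509 -checkend 86400".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM data found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Compare the certificate's NotAfter against "now + 24h".
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h")
		} else {
			fmt.Println("certificate still valid 24h from now")
		}
	}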
	I0929 09:38:10.758994  744475 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:10.759111  744475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 09:38:10.759156  744475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 09:38:10.801701  744475 cri.go:89] found id: ""
	I0929 09:38:10.801777  744475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 09:38:10.814003  744475 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 09:38:10.814030  744475 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 09:38:10.814082  744475 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 09:38:10.825280  744475 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:38:10.826421  744475 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-547715" does not appear in /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.827379  744475 kubeconfig.go:62] /home/jenkins/minikube-integration/21650-382648/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-547715" cluster setting kubeconfig missing "default-k8s-diff-port-547715" context setting]
	I0929 09:38:10.828702  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.830983  744475 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 09:38:10.843171  744475 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 09:38:10.843214  744475 kubeadm.go:593] duration metric: took 29.177344ms to restartPrimaryControlPlane
	I0929 09:38:10.843227  744475 kubeadm.go:394] duration metric: took 84.244515ms to StartCluster
	I0929 09:38:10.843248  744475 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.843363  744475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.845603  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.846384  744475 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 09:38:10.846454  744475 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 09:38:10.846542  744475 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846565  744475 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846574  744475 addons.go:247] addon storage-provisioner should already be in state true
	I0929 09:38:10.846575  744475 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846596  744475 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846614  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846620  744475 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:10.846621  744475 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-547715"
	I0929 09:38:10.846618  744475 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846630  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:10.846642  744475 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846656  744475 addons.go:247] addon dashboard should already be in state true
	W0929 09:38:10.846631  744475 addons.go:247] addon metrics-server should already be in state true
	I0929 09:38:10.846681  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846697  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846974  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847135  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847150  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847155  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.848072  744475 out.go:179] * Verifying Kubernetes components...
	I0929 09:38:10.849415  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.877953  744475 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 09:38:10.877980  744475 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 09:38:10.878525  744475 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.878545  744475 addons.go:247] addon default-storageclass should already be in state true
	I0929 09:38:10.878575  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.879047  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.879403  744475 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 09:38:10.879439  744475 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:10.879448  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 09:38:10.879475  744475 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 09:38:10.879548  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.879454  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 09:38:10.879612  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.883150  744475 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 09:38:10.884341  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 09:38:10.884361  744475 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 09:38:10.884428  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.910318  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.910796  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.911948  744475 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:10.911964  744475 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 09:38:10.912016  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.914592  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.935385  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.956363  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.989150  744475 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:11.038321  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:11.042162  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 09:38:11.042187  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 09:38:11.047218  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 09:38:11.047242  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 09:38:11.070239  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.072804  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 09:38:11.072828  744475 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 09:38:11.078863  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 09:38:11.078893  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 09:38:11.104886  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.104914  744475 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 09:38:11.110131  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 09:38:11.110158  744475 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 09:38:11.142191  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 09:38:11.142219  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 09:38:11.148094  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.148238  744475 retry.go:31] will retry after 359.205678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.151384  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.179885  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 09:38:11.179923  744475 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 09:38:11.182481  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.182514  744475 retry.go:31] will retry after 316.417959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
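The retry.go entries above show each failed apply being re-run after a short delay once the apiserver stops refusing connections on port 8444. A generic Go sketch of that retry-with-delay pattern (the attempt count, delay, and kubectl invocation are assumptions for illustration, not minikube's values or code):

	// retry_apply.go: re-run a command until it succeeds or attempts run out.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retry calls cmd up to attempts times, sleeping delay between failures.
	func retry(attempts int, delay time.Duration, cmd func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = cmd(); err == nil {
				return nil
			}
			time.Sleep(delay)
		}
		return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
	}

	func main() {
		err := retry(5, 400*time.Millisecond, func() error {
			return exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storageclass.yaml").Run()
		})
		if err != nil {
			fmt.Println(err)
		}
	}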
	I0929 09:38:11.208649  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 09:38:11.208682  744475 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 09:38:11.232655  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 09:38:11.232724  744475 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 09:38:11.252807  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 09:38:11.252860  744475 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 09:38:11.272945  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.272972  744475 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 09:38:11.292603  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.499678  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.508207  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:12.841081  744475 node_ready.go:49] node "default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:12.841123  744475 node_ready.go:38] duration metric: took 1.85187108s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:12.841142  744475 api_server.go:52] waiting for apiserver process to appear ...
	I0929 09:38:12.841200  744475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:38:13.424995  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.273447364s)
	I0929 09:38:13.425060  744475 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:13.425163  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.132513063s)
	I0929 09:38:13.425661  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.925949942s)
	I0929 09:38:13.425900  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.917662767s)
	I0929 09:38:13.426006  744475 api_server.go:72] duration metric: took 2.57958819s to wait for apiserver process to appear ...
	I0929 09:38:13.426024  744475 api_server.go:88] waiting for apiserver healthz status ...
	I0929 09:38:13.426045  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.427072  744475 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-547715 addons enable metrics-server
	
	I0929 09:38:13.431499  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.431522  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:13.435572  744475 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0929 09:38:13.436883  744475 addons.go:514] duration metric: took 2.590443822s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0929 09:38:13.926913  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.932318  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.932348  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:14.426994  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:14.431739  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
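The api_server.go lines above poll https://192.168.85.2:8444/healthz until the 500 responses (rbac and priority-class post-start hooks still pending) give way to a 200 "ok". A minimal, hypothetical Go version of that poll (the insecure TLS config and the interval are assumptions for illustration only):

	// healthz_poll.go: GET the apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping verification is only for this sketch; a real client
			// would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.85.2:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

The body of a non-200 response is what the log prints verbatim: a per-check [+]/[-] breakdown, which is why the failing post-start hooks are visible above even though the overall status is 500.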
	I0929 09:38:14.432753  744475 api_server.go:141] control plane version: v1.34.1
	I0929 09:38:14.432785  744475 api_server.go:131] duration metric: took 1.006754243s to wait for apiserver health ...
	I0929 09:38:14.432798  744475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 09:38:14.435903  744475 system_pods.go:59] 9 kube-system pods found
	I0929 09:38:14.435952  744475 system_pods.go:61] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.435967  744475 system_pods.go:61] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.435982  744475 system_pods.go:61] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.435998  744475 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.436014  744475 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.436023  744475 system_pods.go:61] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.436033  744475 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.436045  744475 system_pods.go:61] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.436053  744475 system_pods.go:61] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.436063  744475 system_pods.go:74] duration metric: took 3.257318ms to wait for pod list to return data ...
	I0929 09:38:14.436077  744475 default_sa.go:34] waiting for default service account to be created ...
	I0929 09:38:14.438271  744475 default_sa.go:45] found service account: "default"
	I0929 09:38:14.438293  744475 default_sa.go:55] duration metric: took 2.206178ms for default service account to be created ...
	I0929 09:38:14.438304  744475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 09:38:14.441520  744475 system_pods.go:86] 9 kube-system pods found
	I0929 09:38:14.441555  744475 system_pods.go:89] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.441569  744475 system_pods.go:89] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.441583  744475 system_pods.go:89] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.441591  744475 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.441606  744475 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.441613  744475 system_pods.go:89] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.441622  744475 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.441633  744475 system_pods.go:89] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.441641  744475 system_pods.go:89] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.441654  744475 system_pods.go:126] duration metric: took 3.342797ms to wait for k8s-apps to be running ...
	I0929 09:38:14.441667  744475 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 09:38:14.441718  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:38:14.457198  744475 system_svc.go:56] duration metric: took 15.510885ms WaitForService to wait for kubelet
	I0929 09:38:14.457234  744475 kubeadm.go:578] duration metric: took 3.610818298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:14.457257  744475 node_conditions.go:102] verifying NodePressure condition ...
	I0929 09:38:14.460508  744475 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 09:38:14.460534  744475 node_conditions.go:123] node cpu capacity is 8
	I0929 09:38:14.460550  744475 node_conditions.go:105] duration metric: took 3.287088ms to run NodePressure ...
	I0929 09:38:14.460566  744475 start.go:241] waiting for startup goroutines ...
	I0929 09:38:14.460575  744475 start.go:246] waiting for cluster config update ...
	I0929 09:38:14.460591  744475 start.go:255] writing updated cluster config ...
	I0929 09:38:14.461011  744475 ssh_runner.go:195] Run: rm -f paused
	I0929 09:38:14.465262  744475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:14.469249  744475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 09:38:16.474616  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:18.974817  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:21.474679  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:23.974653  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:25.974904  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:27.975234  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:30.474414  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:32.475244  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:34.975746  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:37.474689  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:39.974324  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:42.474794  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:44.476364  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:46.974499  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:49.474657  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:51.474940  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	I0929 09:38:52.974403  744475 pod_ready.go:94] pod "coredns-66bc5c9577-szmnf" is "Ready"
	I0929 09:38:52.974429  744475 pod_ready.go:86] duration metric: took 38.50515659s for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.977032  744475 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.980878  744475 pod_ready.go:94] pod "etcd-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.980904  744475 pod_ready.go:86] duration metric: took 3.847603ms for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.982681  744475 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.986175  744475 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.986196  744475 pod_ready.go:86] duration metric: took 3.493752ms for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.988006  744475 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.172805  744475 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:53.172860  744475 pod_ready.go:86] duration metric: took 184.829323ms for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.372987  744475 pod_ready.go:83] waiting for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.772398  744475 pod_ready.go:94] pod "kube-proxy-tklgn" is "Ready"
	I0929 09:38:53.772428  744475 pod_ready.go:86] duration metric: took 399.413461ms for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.972993  744475 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373344  744475 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:54.373370  744475 pod_ready.go:86] duration metric: took 400.353446ms for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373382  744475 pod_ready.go:40] duration metric: took 39.908092821s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:54.420218  744475 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:54.422092  744475 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-547715" cluster and "default" namespace by default
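
The wait loop above polls each kube-system control-plane pod by label until it reports Ready. A roughly equivalent manual check (a sketch only, reusing the context name and labels quoted in the log above) would be:

  # Wait for CoreDNS the same way pod_ready does, then list the other labelled control-plane pods
  kubectl --context default-k8s-diff-port-547715 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
  kubectl --context default-k8s-diff-port-547715 -n kube-system get pods \
    -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'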
	
	
	==> CRI-O <==
	Sep 29 09:53:21 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:21.467563988Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d65f214e-7166-48e8-9c3c-71f9c51e08d1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:53:34 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:34.467460320Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=7e092491-a2a8-40fb-af02-0ca3c40f94cd name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:53:34 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:34.467687774Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=7e092491-a2a8-40fb-af02-0ca3c40f94cd name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:53:34 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:34.468352578Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=d853ecc1-436d-49dc-b9ed-e7c884c7e7c3 name=/runtime.v1.ImageService/PullImage
	Sep 29 09:53:34 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:34.927351112Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 29 09:53:35 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:35.468108522Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6385b732-07bb-4238-9d3e-1149b90b3321 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:53:35 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:35.468433994Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6385b732-07bb-4238-9d3e-1149b90b3321 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:53:46 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:46.467608226Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=027e94b1-37ea-4b5a-9eea-4b659d46ea1f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:53:46 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:46.467904035Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=027e94b1-37ea-4b5a-9eea-4b659d46ea1f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:53:50 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:50.468134133Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1292f492-1139-48a3-88b3-c2e8d1636dc1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:53:50 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:50.468479584Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1292f492-1139-48a3-88b3-c2e8d1636dc1 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:53:57 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:57.468179223Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b1396072-5c86-494a-94a1-94f13ea63674 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:53:57 old-k8s-version-383226 crio[557]: time="2025-09-29 09:53:57.468436487Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b1396072-5c86-494a-94a1-94f13ea63674 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:04 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:04.467926741Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a5d75dda-8c16-4709-bc4a-c74cc793f2b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:04 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:04.468265911Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a5d75dda-8c16-4709-bc4a-c74cc793f2b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:12 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:12.468195558Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=3cad828c-5bbc-444e-a71d-1b0154fb6aa6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:12 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:12.468479553Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=3cad828c-5bbc-444e-a71d-1b0154fb6aa6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:17 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:17.467999394Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b768d868-70f2-4b33-9cde-58ce443737c3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:17 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:17.468316449Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b768d868-70f2-4b33-9cde-58ce443737c3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:25 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:25.468314432Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=3a178737-9c87-491a-af29-c8a0c9025f20 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:25 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:25.468552100Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=3a178737-9c87-491a-af29-c8a0c9025f20 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:28 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:28.467991457Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=01e79055-91ed-4882-8bc9-35168359f93e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:28 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:28.468255397Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=01e79055-91ed-4882-8bc9-35168359f93e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:38 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:38.467495359Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c544a806-ab13-4246-8852-87d796f90b83 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:38 old-k8s-version-383226 crio[557]: time="2025-09-29 09:54:38.467745952Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c544a806-ab13-4246-8852-87d796f90b83 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	f09dbde726894       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   40b2e581322fc       dashboard-metrics-scraper-5f989dc9cf-qwlrl
	d16e443ed650c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   965faa72c74e5       storage-provisioner
	cbc99b97a3843       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 minutes ago      Running             coredns                     1                   3fa847a934caa       coredns-5dd5756b68-cwxnf
	fc0d0d64c4cd2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   93a0e999fb28b       busybox
	b44c7d38be7cf       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   0e6a6d37467cd       kindnet-wz6rq
	b7730ad695c27       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a   18 minutes ago      Running             kube-proxy                  1                   c9017a48e7b05       kube-proxy-g86rz
	0efde9fa2435d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   965faa72c74e5       storage-provisioner
	da45c5617ae88       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95   18 minutes ago      Running             kube-apiserver              1                   37aea600c115c       kube-apiserver-old-k8s-version-383226
	32d9fda5cc39b       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157   18 minutes ago      Running             kube-scheduler              1                   37898ba1607f3       kube-scheduler-old-k8s-version-383226
	63b9f8f8d0ec0       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62   18 minutes ago      Running             kube-controller-manager     1                   5175803881e7b       kube-controller-manager-old-k8s-version-383226
	0d1a11e2d7b3f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   18 minutes ago      Running             etcd                        1                   f2f8f4d736ebb       etcd-old-k8s-version-383226
	
	
	==> coredns [cbc99b97a384328da06f3312c734d7b8e538dcff484708c376e421f1ae89db34] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55789 - 43854 "HINFO IN 5184595554245198180.5627202947282764251. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060921737s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-383226
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-383226
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=old-k8s-version-383226
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T09_34_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 09:34:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-383226
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:54:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 09:51:49 +0000   Mon, 29 Sep 2025 09:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 09:51:49 +0000   Mon, 29 Sep 2025 09:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 09:51:49 +0000   Mon, 29 Sep 2025 09:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 09:51:49 +0000   Mon, 29 Sep 2025 09:35:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-383226
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0d427e2d6b420688a79baa17a6c956
	  System UUID:                63eb07de-01db-42f8-9240-9b88f7ef75f9
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-cwxnf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-old-k8s-version-383226                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-wz6rq                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-old-k8s-version-383226             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-383226    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-g86rz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-383226             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-56tsv                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-qwlrl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-vx2xv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node old-k8s-version-383226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-383226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-383226 event: Registered Node old-k8s-version-383226 in Controller
	  Normal  NodeReady                19m                kubelet          Node old-k8s-version-383226 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-383226 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node old-k8s-version-383226 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-383226 event: Registered Node old-k8s-version-383226 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +24.116183] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[ +13.219794] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 36 70 5c 70 56 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[Sep29 09:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 d0 49 6d e5 00 08 06
	[  +0.000572] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +31.077955] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[  +7.090917] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ee a6 ac d9 7a 08 06
	[  +0.048507] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	[Sep29 09:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 9c 10 70 fc bc 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[ +35.403219] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 f0 eb 9a e4 7a 08 06
	[  +0.000378] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	
	
	==> etcd [0d1a11e2d7b3fb658bc4fc710774f7c66a90df230859619c58f1873e32ee7a89] <==
	{"level":"info","ts":"2025-09-29T09:35:57.329646Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-09-29T09:35:57.689371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T09:35:57.68943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T09:35:57.689456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-09-29T09:35:57.689474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T09:35:57.689482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-09-29T09:35:57.689494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-09-29T09:35:57.689504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-09-29T09:35:57.69054Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-383226 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T09:35:57.690556Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T09:35:57.690873Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T09:35:57.691242Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T09:35:57.691273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T09:35:57.694548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-09-29T09:35:57.696531Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T09:36:05.694336Z","caller":"traceutil/trace.go:171","msg":"trace[986107393] transaction","detail":"{read_only:false; response_revision:597; number_of_response:1; }","duration":"108.827245ms","start":"2025-09-29T09:36:05.585489Z","end":"2025-09-29T09:36:05.694317Z","steps":["trace[986107393] 'process raft request'  (duration: 108.697529ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T09:36:59.390187Z","caller":"traceutil/trace.go:171","msg":"trace[424009884] transaction","detail":"{read_only:false; response_revision:707; number_of_response:1; }","duration":"139.542235ms","start":"2025-09-29T09:36:59.250628Z","end":"2025-09-29T09:36:59.39017Z","steps":["trace[424009884] 'process raft request'  (duration: 139.397986ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T09:36:59.574073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.140043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T09:36:59.574164Z","caller":"traceutil/trace.go:171","msg":"trace[975101957] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:707; }","duration":"140.248373ms","start":"2025-09-29T09:36:59.433899Z","end":"2025-09-29T09:36:59.574148Z","steps":["trace[975101957] 'range keys from in-memory index tree'  (duration: 140.05711ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T09:45:58.122645Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2025-09-29T09:45:58.124428Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":969,"took":"1.448847ms","hash":1627961714}
	{"level":"info","ts":"2025-09-29T09:45:58.124472Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1627961714,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T09:50:58.127994Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1220}
	{"level":"info","ts":"2025-09-29T09:50:58.129159Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1220,"took":"838.547µs","hash":2999417790}
	{"level":"info","ts":"2025-09-29T09:50:58.129193Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2999417790,"revision":1220,"compact-revision":969}
	
	
	==> kernel <==
	 09:54:41 up  3:37,  0 users,  load average: 0.63, 0.45, 1.01
	Linux old-k8s-version-383226 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b44c7d38be7cff3ba699998aac743de0e4b4f31749e06739e8bb89aea0ff87a3] <==
	I0929 09:52:41.392931       1 main.go:301] handling current node
	I0929 09:52:51.392910       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:52:51.392960       1 main.go:301] handling current node
	I0929 09:53:01.400898       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:53:01.400927       1 main.go:301] handling current node
	I0929 09:53:11.400924       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:53:11.400956       1 main.go:301] handling current node
	I0929 09:53:21.391899       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:53:21.391935       1 main.go:301] handling current node
	I0929 09:53:31.392932       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:53:31.392971       1 main.go:301] handling current node
	I0929 09:53:41.395525       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:53:41.395556       1 main.go:301] handling current node
	I0929 09:53:51.393963       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:53:51.394008       1 main.go:301] handling current node
	I0929 09:54:01.393970       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:54:01.394003       1 main.go:301] handling current node
	I0929 09:54:11.400391       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:54:11.400437       1 main.go:301] handling current node
	I0929 09:54:21.392908       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:54:21.392946       1 main.go:301] handling current node
	I0929 09:54:31.394852       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:54:31.394897       1 main.go:301] handling current node
	I0929 09:54:41.395900       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0929 09:54:41.395940       1 main.go:301] handling current node
	
	
	==> kube-apiserver [da45c5617ae88aa853d0e35427b7dc76ac2b9ebb0e4e1d666dc6db4eb7bd546e] <==
	W0929 09:51:00.556765       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 09:51:00.556827       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 09:51:00.556769       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 09:51:00.558027       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 09:51:59.467461       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.16.37:443: connect: connection refused
	I0929 09:51:59.467486       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 09:52:00.557041       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 09:52:00.557084       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 09:52:00.557093       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:52:00.558201       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 09:52:00.558299       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 09:52:00.558312       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 09:52:59.467427       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.16.37:443: connect: connection refused
	I0929 09:52:59.467447       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 09:53:59.468426       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.16.37:443: connect: connection refused
	I0929 09:53:59.468455       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 09:54:00.557232       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 09:54:00.557280       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 09:54:00.557288       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:54:00.559464       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 09:54:00.559546       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 09:54:00.559558       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
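
The recurring 503s above are the aggregation layer failing to reach the metrics-server Service because its pod never starts (its image points at the unreachable fake.domain registry, as the kubelet log further down shows). A quick way to confirm that state from the API side, assuming the addon's usual object names, is:

  # The aggregated API registration stays unavailable while the backing pod is down
  kubectl --context old-k8s-version-383226 get apiservice v1beta1.metrics.k8s.io
  # The Service endpoints behind kube-system/metrics-server:443 should be empty in this state
  kubectl --context old-k8s-version-383226 -n kube-system get endpoints metrics-server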
	
	
	==> kube-controller-manager [63b9f8f8d0ec019fbffeb3a52a7e634a9b34fe34ad87e1960e1c15d0282cc91d] <==
	I0929 09:49:43.163946       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 09:50:12.795619       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:50:13.171355       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 09:50:42.800320       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:50:43.178812       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 09:51:12.804752       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:51:13.185917       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 09:51:32.477926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="152.237µs"
	E0929 09:51:42.809066       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:51:43.192579       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 09:51:47.477377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="137.016µs"
	I0929 09:52:09.488607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="121.931µs"
	E0929 09:52:12.813815       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:52:13.199980       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 09:52:13.287769       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="125.018µs"
	E0929 09:52:42.818516       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:52:43.207468       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 09:53:12.822562       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:53:13.214173       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 09:53:42.826985       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:53:43.221817       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 09:53:46.477530       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="114.112µs"
	I0929 09:53:57.478192       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="132.482µs"
	E0929 09:54:12.832326       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 09:54:13.228297       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b7730ad695c272feca810f40fdf6d89ecf76608c2946f871afe0f3bff90fa953] <==
	I0929 09:36:01.052478       1 server_others.go:69] "Using iptables proxy"
	I0929 09:36:01.069155       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I0929 09:36:01.100068       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 09:36:01.103152       1 server_others.go:152] "Using iptables Proxier"
	I0929 09:36:01.103299       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 09:36:01.103334       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 09:36:01.103393       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 09:36:01.103771       1 server.go:846] "Version info" version="v1.28.0"
	I0929 09:36:01.104005       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:36:01.104885       1 config.go:188] "Starting service config controller"
	I0929 09:36:01.104981       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 09:36:01.105077       1 config.go:315] "Starting node config controller"
	I0929 09:36:01.105113       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 09:36:01.106303       1 config.go:97] "Starting endpoint slice config controller"
	I0929 09:36:01.106326       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 09:36:01.205872       1 shared_informer.go:318] Caches are synced for node config
	I0929 09:36:01.205890       1 shared_informer.go:318] Caches are synced for service config
	I0929 09:36:01.207021       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [32d9fda5cc39b4851a4b1de08738a1a3a7d5db22a27ccec955be87812975396a] <==
	I0929 09:35:58.805755       1 serving.go:348] Generated self-signed cert in-memory
	W0929 09:35:59.515764       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 09:35:59.515915       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 09:35:59.515941       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 09:35:59.515968       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 09:35:59.560380       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0929 09:35:59.560507       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:35:59.569522       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:35:59.569621       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 09:35:59.573266       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0929 09:35:59.573368       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0929 09:35:59.670812       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
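
The RBAC warning above quotes a generic fix with placeholders. A concrete but hypothetical instantiation, binding the user named in the error (system:kube-scheduler) rather than a service account, would look like:

  # Hypothetical rolebinding name; grants read access to the extension-apiserver-authentication configmap
  kubectl -n kube-system create rolebinding kube-scheduler-authentication-reader \
    --role=extension-apiserver-authentication-reader \
    --user=system:kube-scheduler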
	
	
	==> kubelet <==
	Sep 29 09:53:35 old-k8s-version-383226 kubelet[699]: E0929 09:53:35.526025     699 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 29 09:53:35 old-k8s-version-383226 kubelet[699]: E0929 09:53:35.526085     699 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 29 09:53:35 old-k8s-version-383226 kubelet[699]: E0929 09:53:35.526293     699 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7gbzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-56tsv_kube-system(973bfe4d-76ba-4e0b-8add-1b82655dd602): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 29 09:53:35 old-k8s-version-383226 kubelet[699]: E0929 09:53:35.526357     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	Sep 29 09:53:36 old-k8s-version-383226 kubelet[699]: I0929 09:53:36.467168     699 scope.go:117] "RemoveContainer" containerID="f09dbde726894f5793bf48ea8fc370d14067a28900003367d6455255d4f7da05"
	Sep 29 09:53:36 old-k8s-version-383226 kubelet[699]: E0929 09:53:36.467516     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:53:46 old-k8s-version-383226 kubelet[699]: E0929 09:53:46.468173     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	Sep 29 09:53:50 old-k8s-version-383226 kubelet[699]: E0929 09:53:50.468703     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vx2xv" podUID="57bd21d6-20a9-46cb-bf7d-d51a2c29739e"
	Sep 29 09:53:51 old-k8s-version-383226 kubelet[699]: I0929 09:53:51.467718     699 scope.go:117] "RemoveContainer" containerID="f09dbde726894f5793bf48ea8fc370d14067a28900003367d6455255d4f7da05"
	Sep 29 09:53:51 old-k8s-version-383226 kubelet[699]: E0929 09:53:51.468026     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:53:57 old-k8s-version-383226 kubelet[699]: E0929 09:53:57.468741     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	Sep 29 09:54:02 old-k8s-version-383226 kubelet[699]: I0929 09:54:02.467660     699 scope.go:117] "RemoveContainer" containerID="f09dbde726894f5793bf48ea8fc370d14067a28900003367d6455255d4f7da05"
	Sep 29 09:54:02 old-k8s-version-383226 kubelet[699]: E0929 09:54:02.468076     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:54:04 old-k8s-version-383226 kubelet[699]: E0929 09:54:04.468540     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vx2xv" podUID="57bd21d6-20a9-46cb-bf7d-d51a2c29739e"
	Sep 29 09:54:12 old-k8s-version-383226 kubelet[699]: E0929 09:54:12.468731     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	Sep 29 09:54:14 old-k8s-version-383226 kubelet[699]: I0929 09:54:14.467235     699 scope.go:117] "RemoveContainer" containerID="f09dbde726894f5793bf48ea8fc370d14067a28900003367d6455255d4f7da05"
	Sep 29 09:54:14 old-k8s-version-383226 kubelet[699]: E0929 09:54:14.467640     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:54:17 old-k8s-version-383226 kubelet[699]: E0929 09:54:17.468625     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vx2xv" podUID="57bd21d6-20a9-46cb-bf7d-d51a2c29739e"
	Sep 29 09:54:25 old-k8s-version-383226 kubelet[699]: E0929 09:54:25.468898     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	Sep 29 09:54:26 old-k8s-version-383226 kubelet[699]: I0929 09:54:26.467380     699 scope.go:117] "RemoveContainer" containerID="f09dbde726894f5793bf48ea8fc370d14067a28900003367d6455255d4f7da05"
	Sep 29 09:54:26 old-k8s-version-383226 kubelet[699]: E0929 09:54:26.467748     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:54:28 old-k8s-version-383226 kubelet[699]: E0929 09:54:28.468523     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vx2xv" podUID="57bd21d6-20a9-46cb-bf7d-d51a2c29739e"
	Sep 29 09:54:38 old-k8s-version-383226 kubelet[699]: I0929 09:54:38.467178     699 scope.go:117] "RemoveContainer" containerID="f09dbde726894f5793bf48ea8fc370d14067a28900003367d6455255d4f7da05"
	Sep 29 09:54:38 old-k8s-version-383226 kubelet[699]: E0929 09:54:38.467541     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qwlrl_kubernetes-dashboard(617042f5-5b7a-4f44-9eb9-8bb3aa96f99f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qwlrl" podUID="617042f5-5b7a-4f44-9eb9-8bb3aa96f99f"
	Sep 29 09:54:38 old-k8s-version-383226 kubelet[699]: E0929 09:54:38.468047     699 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-56tsv" podUID="973bfe4d-76ba-4e0b-8add-1b82655dd602"
	
	
	==> storage-provisioner [0efde9fa2435d78ffaf14f9f1ce132db8dc31af8adfabfd2e82ab66356107690] <==
	I0929 09:36:00.967969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 09:36:30.971245       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d16e443ed650cba7b6a82e26e02342c9a54d95c883ce25457b31bbdcee42b571] <==
	I0929 09:36:31.755823       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 09:36:31.770468       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 09:36:31.770552       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0929 09:36:49.169113       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 09:36:49.169203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f22a9583-c2b0-463c-8e07-b1d35b2e39b6", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-383226_3b568e21-2be8-4088-9270-5d3a50a9340b became leader
	I0929 09:36:49.169317       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-383226_3b568e21-2be8-4088-9270-5d3a50a9340b!
	I0929 09:36:49.269643       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-383226_3b568e21-2be8-4088-9270-5d3a50a9340b!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-383226 -n old-k8s-version-383226
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-383226 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-56tsv kubernetes-dashboard-8694d4445c-vx2xv
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-383226 describe pod metrics-server-57f55c9bc5-56tsv kubernetes-dashboard-8694d4445c-vx2xv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-383226 describe pod metrics-server-57f55c9bc5-56tsv kubernetes-dashboard-8694d4445c-vx2xv: exit status 1 (59.043798ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-56tsv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-vx2xv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-383226 describe pod metrics-server-57f55c9bc5-56tsv kubernetes-dashboard-8694d4445c-vx2xv: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.53s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z8j9m" [ed919a2e-20ad-45ae-af2e-22135bc8c096] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-463478 -n embed-certs-463478
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 09:55:28.742936826 +0000 UTC m=+5176.389562268
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-463478 describe po kubernetes-dashboard-855c9754f9-z8j9m -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-463478 describe po kubernetes-dashboard-855c9754f9-z8j9m -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-z8j9m
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-463478/192.168.103.2
Start Time:       Mon, 29 Sep 2025 09:36:53 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6kvvg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-6kvvg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z8j9m to embed-certs-463478
  Warning  Failed     16m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     12m (x4 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     12m (x5 over 18m)     kubelet            Error: ErrImagePull
  Normal   BackOff    3m26s (x48 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     2m50s (x51 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-463478 logs kubernetes-dashboard-855c9754f9-z8j9m -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-463478 logs kubernetes-dashboard-855c9754f9-z8j9m -n kubernetes-dashboard: exit status 1 (71.303508ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-z8j9m" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-463478 logs kubernetes-dashboard-855c9754f9-z8j9m -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-463478 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-463478
helpers_test.go:243: (dbg) docker inspect embed-certs-463478:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472",
	        "Created": "2025-09-29T09:35:29.199260963Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 730547,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T09:36:37.630678348Z",
	            "FinishedAt": "2025-09-29T09:36:36.782265682Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472/hostname",
	        "HostsPath": "/var/lib/docker/containers/a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472/hosts",
	        "LogPath": "/var/lib/docker/containers/a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472/a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472-json.log",
	        "Name": "/embed-certs-463478",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-463478:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-463478",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a44e8abd6d363d54061e893dd11b69a3f2fc61f1ec8f2b3eb818b97212717472",
	                "LowerDir": "/var/lib/docker/overlay2/5cd03ea948d4d6c43733b56a25a2c568eb64d5074800fd1f7cd5ee8e84b38b58-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5cd03ea948d4d6c43733b56a25a2c568eb64d5074800fd1f7cd5ee8e84b38b58/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5cd03ea948d4d6c43733b56a25a2c568eb64d5074800fd1f7cd5ee8e84b38b58/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5cd03ea948d4d6c43733b56a25a2c568eb64d5074800fd1f7cd5ee8e84b38b58/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-463478",
	                "Source": "/var/lib/docker/volumes/embed-certs-463478/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-463478",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-463478",
	                "name.minikube.sigs.k8s.io": "embed-certs-463478",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f0889acdf98616ba6207e0be28c4a7879350d4fb5132f8ec3f71dee8d95efea",
	            "SandboxKey": "/var/run/docker/netns/7f0889acdf98",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-463478": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9d:20:51:16:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f47a716d00fa0ed0049865c866d39f32a13880b9157b9b533e4e9df61933299f",
	                    "EndpointID": "cbc0261eb6c9b48debb130f9cd55dc628df60a048726e90d4d7d19671a59d5fa",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-463478",
	                        "a44e8abd6d36"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-463478 -n embed-certs-463478
E0929 09:55:29.041594  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-463478 logs -n 25
E0929 09:55:29.301670  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-463478 logs -n 25: (1.208820106s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p newest-cni-879079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ stop    │ -p newest-cni-879079 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-879079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-463478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ image   │ newest-cni-879079 image list --format=json                                                                                                                                                                                                    │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ pause   │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ unpause │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                                                                                          │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-730717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p no-preload-730717 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable dashboard -p no-preload-730717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ start   │ -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-547715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p default-k8s-diff-port-547715 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-547715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ image   │ old-k8s-version-383226 image list --format=json                                                                                                                                                                                               │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ pause   │ -p old-k8s-version-383226 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ unpause │ -p old-k8s-version-383226 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-383226                                                                                                                                                                                                                     │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-383226                                                                                                                                                                                                                     │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 09:38:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 09:38:02.602451  744475 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:38:02.604572  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604588  744475 out.go:374] Setting ErrFile to fd 2...
	I0929 09:38:02.604596  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604882  744475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:38:02.605487  744475 out.go:368] Setting JSON to false
	I0929 09:38:02.606828  744475 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12032,"bootTime":1759126651,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 09:38:02.606958  744475 start.go:140] virtualization: kvm guest
	I0929 09:38:02.608781  744475 out.go:179] * [default-k8s-diff-port-547715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 09:38:02.610638  744475 notify.go:220] Checking for updates...
	I0929 09:38:02.610689  744475 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 09:38:02.611947  744475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 09:38:02.613292  744475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:02.614515  744475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 09:38:02.615846  744475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 09:38:02.617298  744475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 09:38:02.619049  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:02.619871  744475 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 09:38:02.651910  744475 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 09:38:02.652021  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.724566  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.711673677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.724736  744475 docker.go:318] overlay module found
	I0929 09:38:02.726847  744475 out.go:179] * Using the docker driver based on existing profile
	I0929 09:38:02.727965  744475 start.go:304] selected driver: docker
	I0929 09:38:02.727982  744475 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.728131  744475 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 09:38:02.728938  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.798201  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.786507737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.798574  744475 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:02.798625  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:02.798695  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:02.798744  744475 start.go:348] cluster config:
	{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.803960  744475 out.go:179] * Starting "default-k8s-diff-port-547715" primary control-plane node in "default-k8s-diff-port-547715" cluster
	I0929 09:38:02.805367  744475 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 09:38:02.806633  744475 out.go:179] * Pulling base image v0.0.48 ...
	I0929 09:38:02.807764  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:02.807815  744475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 09:38:02.807849  744475 cache.go:58] Caching tarball of preloaded images
	I0929 09:38:02.807847  744475 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 09:38:02.807982  744475 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 09:38:02.808000  744475 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 09:38:02.808163  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:02.832169  744475 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 09:38:02.832193  744475 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 09:38:02.832223  744475 cache.go:232] Successfully downloaded all kic artifacts
	I0929 09:38:02.832255  744475 start.go:360] acquireMachinesLock for default-k8s-diff-port-547715: {Name:mkef8140f377b4de895c8571ff44e24be4754e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 09:38:02.832319  744475 start.go:364] duration metric: took 42.901µs to acquireMachinesLock for "default-k8s-diff-port-547715"
	I0929 09:38:02.832343  744475 start.go:96] Skipping create...Using existing machine configuration
	I0929 09:38:02.832351  744475 fix.go:54] fixHost starting: 
	I0929 09:38:02.832639  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:02.854072  744475 fix.go:112] recreateIfNeeded on default-k8s-diff-port-547715: state=Stopped err=<nil>
	W0929 09:38:02.854102  744475 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 09:38:02.225099  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:04.724187  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:06.724381  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:02.857616  744475 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-547715" ...
	I0929 09:38:02.857727  744475 cli_runner.go:164] Run: docker start default-k8s-diff-port-547715
	I0929 09:38:03.156711  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:03.180888  744475 kic.go:430] container "default-k8s-diff-port-547715" state is running.
	I0929 09:38:03.181888  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:03.203574  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:03.203810  744475 machine.go:93] provisionDockerMachine start ...
	I0929 09:38:03.203918  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:03.225450  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:03.225788  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:03.225809  744475 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 09:38:03.226519  744475 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33470->127.0.0.1:33506: read: connection reset by peer
	I0929 09:38:06.363220  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.363248  744475 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-547715"
	I0929 09:38:06.363324  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.381317  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.381536  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.381550  744475 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-547715 && echo "default-k8s-diff-port-547715" | sudo tee /etc/hostname
	I0929 09:38:06.531735  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.531842  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.549948  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.550236  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.550256  744475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-547715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-547715/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-547715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 09:38:06.685613  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 09:38:06.685649  744475 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 09:38:06.685684  744475 ubuntu.go:190] setting up certificates
	I0929 09:38:06.685695  744475 provision.go:84] configureAuth start
	I0929 09:38:06.685750  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:06.704839  744475 provision.go:143] copyHostCerts
	I0929 09:38:06.704915  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem, removing ...
	I0929 09:38:06.704934  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem
	I0929 09:38:06.705006  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 09:38:06.705139  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem, removing ...
	I0929 09:38:06.705152  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem
	I0929 09:38:06.705182  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 09:38:06.705261  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem, removing ...
	I0929 09:38:06.705269  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem
	I0929 09:38:06.705295  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 09:38:06.705471  744475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-547715 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-547715 localhost minikube]
	I0929 09:38:06.863319  744475 provision.go:177] copyRemoteCerts
	I0929 09:38:06.863393  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 09:38:06.863443  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.882627  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:06.979437  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 09:38:07.004710  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 09:38:07.029798  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 09:38:07.054802  744475 provision.go:87] duration metric: took 369.089658ms to configureAuth
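configureAuth above regenerates the server certificate with the SANs listed earlier (127.0.0.1, 192.168.85.2, default-k8s-diff-port-547715, localhost, minikube) and copies it to /etc/docker/server.pem. If you want to confirm the copied cert really carries those SANs, a minimal check run inside the node would look like this (illustrative only, assuming OpenSSL 1.1.1+ on the guest, which Ubuntu 22.04 ships):

	# print subject and SANs of the provisioned server certificate
	sudo openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem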
	I0929 09:38:07.054846  744475 ubuntu.go:206] setting minikube options for container-runtime
	I0929 09:38:07.055025  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:07.055152  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.073937  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:07.074181  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:07.074200  744475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 09:38:07.357669  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 09:38:07.357696  744475 machine.go:96] duration metric: took 4.15386954s to provisionDockerMachine
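The CRIO_MINIKUBE_OPTIONS value written just above only takes effect if the crio unit in the kicbase image sources /etc/sysconfig/crio.minikube; that wiring is an assumption here, and a quick way to verify it on the node would be:

	# check that the sysconfig file written by provisioning is referenced by the crio unit
	systemctl cat crio | grep -nE 'EnvironmentFile|CRIO_MINIKUBE_OPTIONS'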
	I0929 09:38:07.357709  744475 start.go:293] postStartSetup for "default-k8s-diff-port-547715" (driver="docker")
	I0929 09:38:07.357723  744475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 09:38:07.357795  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 09:38:07.357864  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.376587  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.473948  744475 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 09:38:07.477599  744475 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 09:38:07.477638  744475 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 09:38:07.477651  744475 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 09:38:07.477659  744475 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 09:38:07.477675  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 09:38:07.477729  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 09:38:07.477798  744475 filesync.go:149] local asset: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem -> 3862252.pem in /etc/ssl/certs
	I0929 09:38:07.477941  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 09:38:07.487030  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:07.511935  744475 start.go:296] duration metric: took 154.207911ms for postStartSetup
	I0929 09:38:07.512029  744475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:38:07.512065  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.530146  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.622415  744475 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 09:38:07.627142  744475 fix.go:56] duration metric: took 4.794784277s for fixHost
	I0929 09:38:07.627172  744475 start.go:83] releasing machines lock for "default-k8s-diff-port-547715", held for 4.794838826s
	I0929 09:38:07.627231  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:07.645874  744475 ssh_runner.go:195] Run: cat /version.json
	I0929 09:38:07.645918  744475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 09:38:07.645945  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.645972  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.664991  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.665181  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.828453  744475 ssh_runner.go:195] Run: systemctl --version
	I0929 09:38:07.833549  744475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 09:38:07.976610  744475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 09:38:07.981640  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:07.991646  744475 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 09:38:07.991738  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:08.001522  744475 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 09:38:08.001550  744475 start.go:495] detecting cgroup driver to use...
	I0929 09:38:08.001586  744475 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 09:38:08.001645  744475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 09:38:08.014507  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 09:38:08.026523  744475 docker.go:218] disabling cri-docker service (if available) ...
	I0929 09:38:08.026594  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 09:38:08.040674  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 09:38:08.052914  744475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 09:38:08.121663  744475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 09:38:08.190873  744475 docker.go:234] disabling docker service ...
	I0929 09:38:08.190996  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 09:38:08.203929  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 09:38:08.215853  744475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 09:38:08.282230  744475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 09:38:08.347410  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 09:38:08.359320  744475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 09:38:08.376309  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:08.524854  744475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 09:38:08.524933  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.536486  744475 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 09:38:08.536545  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.547317  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.557769  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.568183  744475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 09:38:08.578182  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.588665  744475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.598857  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.609520  744475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 09:38:08.618464  744475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 09:38:08.627869  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:08.694951  744475 ssh_runner.go:195] Run: sudo systemctl restart crio
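The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A hedged way to eyeball the result is to grep for the keys the log just touched; the expected values come straight from the commands above, while the rest of the drop-in is assumed to be the stock kicbase file:

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",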
	I0929 09:38:08.976752  744475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 09:38:08.976819  744475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 09:38:08.980869  744475 start.go:563] Will wait 60s for crictl version
	I0929 09:38:08.980932  744475 ssh_runner.go:195] Run: which crictl
	I0929 09:38:08.984701  744475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 09:38:09.019500  744475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 09:38:09.019620  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.055087  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.091964  744475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	W0929 09:38:08.724626  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:09.223924  739826 pod_ready.go:94] pod "coredns-66bc5c9577-ncwp4" is "Ready"
	I0929 09:38:09.224002  739826 pod_ready.go:86] duration metric: took 41.005435401s for pod "coredns-66bc5c9577-ncwp4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.226573  739826 pod_ready.go:83] waiting for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.230177  739826 pod_ready.go:94] pod "etcd-no-preload-730717" is "Ready"
	I0929 09:38:09.230196  739826 pod_ready.go:86] duration metric: took 3.600648ms for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.232019  739826 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.235556  739826 pod_ready.go:94] pod "kube-apiserver-no-preload-730717" is "Ready"
	I0929 09:38:09.235574  739826 pod_ready.go:86] duration metric: took 3.535675ms for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.237200  739826 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.422451  739826 pod_ready.go:94] pod "kube-controller-manager-no-preload-730717" is "Ready"
	I0929 09:38:09.422486  739826 pod_ready.go:86] duration metric: took 185.263743ms for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.623052  739826 pod_ready.go:83] waiting for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.022664  739826 pod_ready.go:94] pod "kube-proxy-4bmgw" is "Ready"
	I0929 09:38:10.022689  739826 pod_ready.go:86] duration metric: took 399.612543ms for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.224443  739826 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622809  739826 pod_ready.go:94] pod "kube-scheduler-no-preload-730717" is "Ready"
	I0929 09:38:10.622852  739826 pod_ready.go:86] duration metric: took 398.374387ms for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622869  739826 pod_ready.go:40] duration metric: took 42.407933129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:10.670550  739826 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:10.673808  739826 out.go:179] * Done! kubectl is now configured to use "no-preload-730717" cluster and "default" namespace by default
	I0929 09:38:09.093120  744475 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-547715 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 09:38:09.111264  744475 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 09:38:09.115466  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:09.127999  744475 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 09:38:09.128194  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.274999  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.416048  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.554074  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:09.554387  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.693270  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.833942  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.976460  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.021351  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.021374  744475 crio.go:433] Images already preloaded, skipping extraction
	I0929 09:38:10.021423  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.057863  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.057891  744475 cache_images.go:85] Images are preloaded, skipping loading
	I0929 09:38:10.057901  744475 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I0929 09:38:10.058037  744475 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-547715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 09:38:10.058111  744475 ssh_runner.go:195] Run: crio config
	I0929 09:38:10.102165  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:10.102193  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:10.102207  744475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 09:38:10.102236  744475 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-547715 NodeName:default-k8s-diff-port-547715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 09:38:10.102404  744475 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-547715"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
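The generated config above is later written to /var/tmp/minikube/kubeadm.yaml.new. To sanity-check a dump like this outside of minikube, kubeadm can validate it statically; a minimal sketch, assuming the YAML has been saved locally as kubeadm.yaml and a kubeadm binary matching v1.34 is on PATH:

	# static validation of the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents
	kubeadm config validate --config kubeadm.yaml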
	I0929 09:38:10.102481  744475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 09:38:10.112188  744475 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 09:38:10.112255  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 09:38:10.121661  744475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 09:38:10.140487  744475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 09:38:10.160494  744475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0929 09:38:10.179722  744475 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 09:38:10.183977  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:10.196126  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.262691  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.292254  744475 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715 for IP: 192.168.85.2
	I0929 09:38:10.292283  744475 certs.go:194] generating shared ca certs ...
	I0929 09:38:10.292301  744475 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.292443  744475 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 09:38:10.292483  744475 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 09:38:10.292493  744475 certs.go:256] generating profile certs ...
	I0929 09:38:10.292592  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/client.key
	I0929 09:38:10.292649  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key.78d67a41
	I0929 09:38:10.292690  744475 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key
	I0929 09:38:10.292789  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem (1338 bytes)
	W0929 09:38:10.292816  744475 certs.go:480] ignoring /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225_empty.pem, impossibly tiny 0 bytes
	I0929 09:38:10.292825  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 09:38:10.292877  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 09:38:10.292902  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 09:38:10.292924  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 09:38:10.292963  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:10.293652  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 09:38:10.320976  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 09:38:10.349012  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 09:38:10.381487  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 09:38:10.406553  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 09:38:10.432469  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 09:38:10.458734  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 09:38:10.483339  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 09:38:10.508019  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /usr/share/ca-certificates/3862252.pem (1708 bytes)
	I0929 09:38:10.533382  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 09:38:10.558362  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem --> /usr/share/ca-certificates/386225.pem (1338 bytes)
	I0929 09:38:10.583377  744475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 09:38:10.602070  744475 ssh_runner.go:195] Run: openssl version
	I0929 09:38:10.607660  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3862252.pem && ln -fs /usr/share/ca-certificates/3862252.pem /etc/ssl/certs/3862252.pem"
	I0929 09:38:10.617911  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622307  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 08:48 /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622354  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.629918  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3862252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 09:38:10.640804  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 09:38:10.651151  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655258  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655316  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.662603  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 09:38:10.672822  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386225.pem && ln -fs /usr/share/ca-certificates/386225.pem /etc/ssl/certs/386225.pem"
	I0929 09:38:10.683319  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687277  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 08:48 /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687348  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.696079  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386225.pem /etc/ssl/certs/51391683.0"
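The /etc/ssl/certs link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hash of each certificate with a .0 collision suffix, which is why the log runs openssl x509 -hash first. Reproducing one by hand (illustrative, reusing the same path from the log):

	# -hash prints the subject hash that becomes the symlink name
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "${HASH}.0"    # b5213941.0 in this run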
	I0929 09:38:10.707660  744475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 09:38:10.711977  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 09:38:10.719705  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 09:38:10.727227  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 09:38:10.734938  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 09:38:10.742331  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 09:38:10.750000  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 09:38:10.758994  744475 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:10.759111  744475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 09:38:10.759156  744475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 09:38:10.801701  744475 cri.go:89] found id: ""
	I0929 09:38:10.801777  744475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 09:38:10.814003  744475 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 09:38:10.814030  744475 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 09:38:10.814082  744475 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 09:38:10.825280  744475 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:38:10.826421  744475 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-547715" does not appear in /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.827379  744475 kubeconfig.go:62] /home/jenkins/minikube-integration/21650-382648/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-547715" cluster setting kubeconfig missing "default-k8s-diff-port-547715" context setting]
	I0929 09:38:10.828702  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.830983  744475 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 09:38:10.843171  744475 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 09:38:10.843214  744475 kubeadm.go:593] duration metric: took 29.177344ms to restartPrimaryControlPlane
	I0929 09:38:10.843227  744475 kubeadm.go:394] duration metric: took 84.244515ms to StartCluster
	I0929 09:38:10.843248  744475 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.843363  744475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.845603  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.846384  744475 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 09:38:10.846454  744475 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 09:38:10.846542  744475 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846565  744475 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846574  744475 addons.go:247] addon storage-provisioner should already be in state true
	I0929 09:38:10.846575  744475 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846596  744475 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846614  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846620  744475 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:10.846621  744475 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-547715"
	I0929 09:38:10.846618  744475 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846630  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:10.846642  744475 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846656  744475 addons.go:247] addon dashboard should already be in state true
	W0929 09:38:10.846631  744475 addons.go:247] addon metrics-server should already be in state true
	I0929 09:38:10.846681  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846697  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846974  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847135  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847150  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847155  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.848072  744475 out.go:179] * Verifying Kubernetes components...
	I0929 09:38:10.849415  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.877953  744475 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 09:38:10.877980  744475 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 09:38:10.878525  744475 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.878545  744475 addons.go:247] addon default-storageclass should already be in state true
	I0929 09:38:10.878575  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.879047  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.879403  744475 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 09:38:10.879439  744475 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:10.879448  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 09:38:10.879475  744475 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 09:38:10.879548  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.879454  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 09:38:10.879612  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.883150  744475 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 09:38:10.884341  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 09:38:10.884361  744475 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 09:38:10.884428  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.910318  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.910796  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.911948  744475 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:10.911964  744475 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 09:38:10.912016  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.914592  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.935385  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.956363  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.989150  744475 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:11.038321  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:11.042162  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 09:38:11.042187  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 09:38:11.047218  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 09:38:11.047242  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 09:38:11.070239  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.072804  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 09:38:11.072828  744475 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 09:38:11.078863  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 09:38:11.078893  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 09:38:11.104886  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.104914  744475 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 09:38:11.110131  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 09:38:11.110158  744475 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 09:38:11.142191  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 09:38:11.142219  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 09:38:11.148094  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.148238  744475 retry.go:31] will retry after 359.205678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.151384  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.179885  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 09:38:11.179923  744475 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 09:38:11.182481  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.182514  744475 retry.go:31] will retry after 316.417959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.208649  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 09:38:11.208682  744475 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 09:38:11.232655  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 09:38:11.232724  744475 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 09:38:11.252807  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 09:38:11.252860  744475 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 09:38:11.272945  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.272972  744475 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 09:38:11.292603  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.499678  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.508207  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:12.841081  744475 node_ready.go:49] node "default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:12.841123  744475 node_ready.go:38] duration metric: took 1.85187108s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:12.841142  744475 api_server.go:52] waiting for apiserver process to appear ...
	I0929 09:38:12.841200  744475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:38:13.424995  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.273447364s)
	I0929 09:38:13.425060  744475 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:13.425163  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.132513063s)
	I0929 09:38:13.425661  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.925949942s)
	I0929 09:38:13.425900  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.917662767s)
	I0929 09:38:13.426006  744475 api_server.go:72] duration metric: took 2.57958819s to wait for apiserver process to appear ...
	I0929 09:38:13.426024  744475 api_server.go:88] waiting for apiserver healthz status ...
	I0929 09:38:13.426045  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.427072  744475 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-547715 addons enable metrics-server
	
	I0929 09:38:13.431499  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.431522  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:13.435572  744475 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0929 09:38:13.436883  744475 addons.go:514] duration metric: took 2.590443822s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0929 09:38:13.926913  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.932318  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.932348  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:14.426994  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:14.431739  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0929 09:38:14.432753  744475 api_server.go:141] control plane version: v1.34.1
	I0929 09:38:14.432785  744475 api_server.go:131] duration metric: took 1.006754243s to wait for apiserver health ...
	I0929 09:38:14.432798  744475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 09:38:14.435903  744475 system_pods.go:59] 9 kube-system pods found
	I0929 09:38:14.435952  744475 system_pods.go:61] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.435967  744475 system_pods.go:61] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.435982  744475 system_pods.go:61] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.435998  744475 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.436014  744475 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.436023  744475 system_pods.go:61] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.436033  744475 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.436045  744475 system_pods.go:61] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.436053  744475 system_pods.go:61] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.436063  744475 system_pods.go:74] duration metric: took 3.257318ms to wait for pod list to return data ...
	I0929 09:38:14.436077  744475 default_sa.go:34] waiting for default service account to be created ...
	I0929 09:38:14.438271  744475 default_sa.go:45] found service account: "default"
	I0929 09:38:14.438293  744475 default_sa.go:55] duration metric: took 2.206178ms for default service account to be created ...
	I0929 09:38:14.438304  744475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 09:38:14.441520  744475 system_pods.go:86] 9 kube-system pods found
	I0929 09:38:14.441555  744475 system_pods.go:89] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.441569  744475 system_pods.go:89] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.441583  744475 system_pods.go:89] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.441591  744475 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.441606  744475 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.441613  744475 system_pods.go:89] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.441622  744475 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.441633  744475 system_pods.go:89] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.441641  744475 system_pods.go:89] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.441654  744475 system_pods.go:126] duration metric: took 3.342797ms to wait for k8s-apps to be running ...
	I0929 09:38:14.441667  744475 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 09:38:14.441718  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:38:14.457198  744475 system_svc.go:56] duration metric: took 15.510885ms WaitForService to wait for kubelet
	I0929 09:38:14.457234  744475 kubeadm.go:578] duration metric: took 3.610818298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:14.457257  744475 node_conditions.go:102] verifying NodePressure condition ...
	I0929 09:38:14.460508  744475 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 09:38:14.460534  744475 node_conditions.go:123] node cpu capacity is 8
	I0929 09:38:14.460550  744475 node_conditions.go:105] duration metric: took 3.287088ms to run NodePressure ...
	I0929 09:38:14.460566  744475 start.go:241] waiting for startup goroutines ...
	I0929 09:38:14.460575  744475 start.go:246] waiting for cluster config update ...
	I0929 09:38:14.460591  744475 start.go:255] writing updated cluster config ...
	I0929 09:38:14.461011  744475 ssh_runner.go:195] Run: rm -f paused
	I0929 09:38:14.465262  744475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:14.469249  744475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 09:38:16.474616  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:18.974817  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:21.474679  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:23.974653  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:25.974904  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:27.975234  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:30.474414  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:32.475244  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:34.975746  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:37.474689  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:39.974324  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:42.474794  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:44.476364  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:46.974499  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:49.474657  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:51.474940  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	I0929 09:38:52.974403  744475 pod_ready.go:94] pod "coredns-66bc5c9577-szmnf" is "Ready"
	I0929 09:38:52.974429  744475 pod_ready.go:86] duration metric: took 38.50515659s for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.977032  744475 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.980878  744475 pod_ready.go:94] pod "etcd-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.980904  744475 pod_ready.go:86] duration metric: took 3.847603ms for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.982681  744475 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.986175  744475 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.986196  744475 pod_ready.go:86] duration metric: took 3.493752ms for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.988006  744475 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.172805  744475 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:53.172860  744475 pod_ready.go:86] duration metric: took 184.829323ms for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.372987  744475 pod_ready.go:83] waiting for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.772398  744475 pod_ready.go:94] pod "kube-proxy-tklgn" is "Ready"
	I0929 09:38:53.772428  744475 pod_ready.go:86] duration metric: took 399.413461ms for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.972993  744475 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373344  744475 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:54.373370  744475 pod_ready.go:86] duration metric: took 400.353446ms for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373382  744475 pod_ready.go:40] duration metric: took 39.908092821s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:54.420218  744475 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:54.422092  744475 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-547715" cluster and "default" namespace by default
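
The log above repeatedly probes the apiserver's /healthz endpoint until it answers 200 "ok"; the interim 500 responses simply enumerate which post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending. The Go sketch below reproduces that kind of readiness poll against the URL shown in the log. It is an illustration only, not minikube's api_server.go: in particular, the InsecureSkipVerify transport is an assumption made to keep the example self-contained, whereas the real client is configured with the cluster's CA certificate.

	// waitForHealthz polls an apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline expires, printing the body of any non-200
	// response (which lists the failing post-start hooks, as seen above).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only; the real check trusts the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}

In this run the endpoint flipped from 500 to 200 in roughly one second (the log records 1.006754243s), so a sub-second poll interval like the one above is enough to observe the transition.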
	
	
	==> CRI-O <==
	Sep 29 09:54:10 embed-certs-463478 crio[560]: time="2025-09-29 09:54:10.406105901Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=d7a24ee7-745d-43e8-87a1-991ee351a752 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:21 embed-certs-463478 crio[560]: time="2025-09-29 09:54:21.406341829Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1180fc7e-74d4-4a42-8f42-2d47a8252e8b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:21 embed-certs-463478 crio[560]: time="2025-09-29 09:54:21.406630175Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1180fc7e-74d4-4a42-8f42-2d47a8252e8b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:23 embed-certs-463478 crio[560]: time="2025-09-29 09:54:23.406117455Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=67673e84-8489-42a8-8228-6130c7df2fe6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:23 embed-certs-463478 crio[560]: time="2025-09-29 09:54:23.406472265Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=67673e84-8489-42a8-8228-6130c7df2fe6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:33 embed-certs-463478 crio[560]: time="2025-09-29 09:54:33.406044696Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=976cb9db-5a0a-49ea-a8c2-45d90add9233 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:33 embed-certs-463478 crio[560]: time="2025-09-29 09:54:33.406318972Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=976cb9db-5a0a-49ea-a8c2-45d90add9233 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:35 embed-certs-463478 crio[560]: time="2025-09-29 09:54:35.406069087Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=dbb96f2d-83be-4f6e-b1f5-9ccd61a0cbbc name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:35 embed-certs-463478 crio[560]: time="2025-09-29 09:54:35.406421843Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=dbb96f2d-83be-4f6e-b1f5-9ccd61a0cbbc name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:47 embed-certs-463478 crio[560]: time="2025-09-29 09:54:47.405780572Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2db64721-ba4f-4c23-81ac-f337e5ecfb11 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:47 embed-certs-463478 crio[560]: time="2025-09-29 09:54:47.406071288Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2db64721-ba4f-4c23-81ac-f337e5ecfb11 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:49 embed-certs-463478 crio[560]: time="2025-09-29 09:54:49.406379520Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7b0f25ac-a9b4-4105-a98e-23dbbdf50e88 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:49 embed-certs-463478 crio[560]: time="2025-09-29 09:54:49.406695006Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=7b0f25ac-a9b4-4105-a98e-23dbbdf50e88 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:00 embed-certs-463478 crio[560]: time="2025-09-29 09:55:00.406160199Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=af036c78-6aa9-4845-9166-340845bd67c4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:00 embed-certs-463478 crio[560]: time="2025-09-29 09:55:00.406423901Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=af036c78-6aa9-4845-9166-340845bd67c4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:01 embed-certs-463478 crio[560]: time="2025-09-29 09:55:01.405962932Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=adf8e6c6-dccf-4f05-bac1-a4ac298dd038 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:01 embed-certs-463478 crio[560]: time="2025-09-29 09:55:01.406280573Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=adf8e6c6-dccf-4f05-bac1-a4ac298dd038 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:14 embed-certs-463478 crio[560]: time="2025-09-29 09:55:14.405705770Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c8f3c787-2cf1-4502-9db3-1573325a0b5b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:14 embed-certs-463478 crio[560]: time="2025-09-29 09:55:14.406002689Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c8f3c787-2cf1-4502-9db3-1573325a0b5b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:16 embed-certs-463478 crio[560]: time="2025-09-29 09:55:16.406074331Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e1d77543-3210-4b14-9cfb-e8381206a40b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:16 embed-certs-463478 crio[560]: time="2025-09-29 09:55:16.406391012Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e1d77543-3210-4b14-9cfb-e8381206a40b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:27 embed-certs-463478 crio[560]: time="2025-09-29 09:55:27.405592315Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=51559320-fd61-4f3e-8058-6d93d90c454e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:27 embed-certs-463478 crio[560]: time="2025-09-29 09:55:27.405897653Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=51559320-fd61-4f3e-8058-6d93d90c454e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:29 embed-certs-463478 crio[560]: time="2025-09-29 09:55:29.405516217Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=57489b36-8ad7-4d55-9128-d9a8ce681ba6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:29 embed-certs-463478 crio[560]: time="2025-09-29 09:55:29.405850380Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=57489b36-8ad7-4d55-9128-d9a8ce681ba6 name=/runtime.v1.ImageService/ImageStatus
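
The CRI-O entries above show the kubelet repeatedly asking for two images that are never found locally: fake.domain/registry.k8s.io/echoserver:1.4, whose registry host cannot resolve, and the kubernetesui/dashboard digest. A quick way to see which pods are stuck behind those images is to list container "waiting" reasons; the Go sketch below shells out to kubectl with a JSONPath query for brevity. The query is standard kubectl JSONPath, but treat the approach as illustrative rather than part of this test suite.

	// Print namespace/pod plus any container "waiting" reasons
	// (e.g. ImagePullBackOff, ErrImagePull) across all namespaces.
	// kubectl ignores missing template keys by default, so pods without
	// waiting containers simply print an empty reason.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command(
			"kubectl", "get", "pods", "-A",
			"-o", `jsonpath={range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{": "}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}`,
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("%s", out)
	}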
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	48be25da7de8c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   0531764e1e569       dashboard-metrics-scraper-6ffb444bf9-cwq99
	21c186f4ce38f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   c464ef39d6367       storage-provisioner
	ecc81e8f7932b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   c683e79f1a94e       coredns-66bc5c9577-ng4bv
	b690bc729bd3a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   07829e3fd5c6d       kindnet-9nmlh
	039d40493c8bf       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   412dbba55ab2e       busybox
	28dff9304995e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   18 minutes ago      Running             kube-proxy                  1                   1b6952857fdbd       kube-proxy-k4ld5
	d53267deead34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   c464ef39d6367       storage-provisioner
	d6f847bce3be4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   18 minutes ago      Running             kube-scheduler              1                   e9be904d3fd8a       kube-scheduler-embed-certs-463478
	3126402649a15       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   007a32fcff623       etcd-embed-certs-463478
	a69cb8e81f046       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   18 minutes ago      Running             kube-controller-manager     1                   9538c1f652fb8       kube-controller-manager-embed-certs-463478
	280012c3ca262       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   18 minutes ago      Running             kube-apiserver              1                   f8b9999252fb9       kube-apiserver-embed-certs-463478
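
The listing above shows dashboard-metrics-scraper in the Exited state on its 8th attempt while the other containers keep running, i.e. a crash-looping container. A common follow-up is to read the previously exited container's logs; the sketch below shells out to kubectl for that. The pod name comes from the listing; the kubernetes-dashboard namespace is assumed here because that is where the dashboard addon normally deploys it.

	// Fetch the logs of the previously exited container of the
	// crash-looping dashboard-metrics-scraper pod.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command(
			"kubectl", "logs", "--previous",
			"-n", "kubernetes-dashboard",
			"dashboard-metrics-scraper-6ffb444bf9-cwq99",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl logs failed: %v\n", err)
		}
		fmt.Printf("%s", out)
	}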
	
	
	==> coredns [ecc81e8f7932b0e3feb6acfdd2c812326afd5178d9777c7b3d558e0d8d36a8f7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47970 - 5425 "HINFO IN 974549403489356571.6881221070274710199. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.065518313s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
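
The CoreDNS log above shows its client-go reflectors timing out against the kubernetes Service VIP early in startup ("dial tcp 10.96.0.1:443: i/o timeout"). The probe below demonstrates the same dial from Go; the address is the one in the log, the 5-second timeout is an arbitrary choice, and it is only meaningful when run from inside the pod network (for example from a debug pod), not from the host.

	// Probe the in-cluster kubernetes Service VIP the way CoreDNS's
	// failing list calls did, reporting reachability of 10.96.0.1:443.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			// Matches the failure mode above: "dial tcp 10.96.0.1:443: i/o timeout".
			fmt.Println("kubernetes Service VIP unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("kubernetes Service VIP reachable")
	}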
	
	
	==> describe nodes <==
	Name:               embed-certs-463478
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-463478
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=embed-certs-463478
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T09_35_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 09:35:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-463478
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:55:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 09:54:50 +0000   Mon, 29 Sep 2025 09:35:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 09:54:50 +0000   Mon, 29 Sep 2025 09:35:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 09:54:50 +0000   Mon, 29 Sep 2025 09:35:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 09:54:50 +0000   Mon, 29 Sep 2025 09:36:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-463478
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 ffa6578ce1684e6984ae5415f52f912e
	  System UUID:                72ae182d-f4a0-49ca-b485-ab8af5072e06
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-ng4bv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-embed-certs-463478                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-9nmlh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-embed-certs-463478             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-embed-certs-463478    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-k4ld5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-embed-certs-463478             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-skth6               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cwq99    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z8j9m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-463478 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-463478 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node embed-certs-463478 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     19m                kubelet          Node embed-certs-463478 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node embed-certs-463478 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node embed-certs-463478 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-463478 event: Registered Node embed-certs-463478 in Controller
	  Normal  NodeReady                19m                kubelet          Node embed-certs-463478 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-463478 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-463478 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node embed-certs-463478 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-463478 event: Registered Node embed-certs-463478 in Controller
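
The node description above reports Ready=True with healthy memory, disk, and PID pressure conditions. If that needed to be checked programmatically rather than by reading describe output, a JSONPath query over the node's status conditions is enough; the sketch below shells out to kubectl and uses the node name from the output above (client-go would work equally well, this is just the shortest self-contained form).

	// Read the Ready condition of the node shown in the description above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command(
			"kubectl", "get", "node", "embed-certs-463478",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		fmt.Println("Ready:", strings.TrimSpace(string(out))) // expected "True" for this node
	}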
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +24.116183] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[ +13.219794] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 36 70 5c 70 56 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[Sep29 09:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 d0 49 6d e5 00 08 06
	[  +0.000572] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +31.077955] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[  +7.090917] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ee a6 ac d9 7a 08 06
	[  +0.048507] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	[Sep29 09:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 9c 10 70 fc bc 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[ +35.403219] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 f0 eb 9a e4 7a 08 06
	[  +0.000378] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	
	
	==> etcd [3126402649a15b187acca268b80334b4a484cf819155f77c981e6e7b90d59267] <==
	{"level":"warn","ts":"2025-09-29T09:36:47.462872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.471366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.485974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.502345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.511972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.519167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.532229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.539738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.547010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:36:47.612228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35010","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T09:36:58.486179Z","caller":"traceutil/trace.go:172","msg":"trace[1830524343] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"127.978568ms","start":"2025-09-29T09:36:58.358182Z","end":"2025-09-29T09:36:58.486160Z","steps":["trace[1830524343] 'process raft request'  (duration: 127.851067ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T09:36:58.684968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.513407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-ng4bv\" limit:1 ","response":"range_response_count:1 size:5920"}
	{"level":"info","ts":"2025-09-29T09:36:58.685063Z","caller":"traceutil/trace.go:172","msg":"trace[1656099704] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-ng4bv; range_end:; response_count:1; response_revision:653; }","duration":"121.635853ms","start":"2025-09-29T09:36:58.563409Z","end":"2025-09-29T09:36:58.685045Z","steps":["trace[1656099704] 'range keys from in-memory index tree'  (duration: 121.293943ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T09:36:59.896783Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.046829ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788991598031837 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-463478\" mod_revision:654 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-463478\" value_size:7926 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-463478\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-29T09:36:59.896922Z","caller":"traceutil/trace.go:172","msg":"trace[1390110800] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"290.039816ms","start":"2025-09-29T09:36:59.606867Z","end":"2025-09-29T09:36:59.896906Z","steps":["trace[1390110800] 'process raft request'  (duration: 155.273801ms)","trace[1390110800] 'compare'  (duration: 133.939622ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T09:37:00.186419Z","caller":"traceutil/trace.go:172","msg":"trace[507678109] linearizableReadLoop","detail":"{readStateIndex:686; appliedIndex:686; }","duration":"122.947367ms","start":"2025-09-29T09:37:00.063450Z","end":"2025-09-29T09:37:00.186397Z","steps":["trace[507678109] 'read index received'  (duration: 122.938064ms)","trace[507678109] 'applied index is now lower than readState.Index'  (duration: 7.983µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T09:37:00.187099Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.614697ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-ng4bv\" limit:1 ","response":"range_response_count:1 size:5920"}
	{"level":"info","ts":"2025-09-29T09:37:00.187184Z","caller":"traceutil/trace.go:172","msg":"trace[1405870858] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-ng4bv; range_end:; response_count:1; response_revision:658; }","duration":"123.729492ms","start":"2025-09-29T09:37:00.063438Z","end":"2025-09-29T09:37:00.187167Z","steps":["trace[1405870858] 'agreement among raft nodes before linearized reading'  (duration: 123.041051ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T09:37:00.187406Z","caller":"traceutil/trace.go:172","msg":"trace[1500197932] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"191.840705ms","start":"2025-09-29T09:36:59.995550Z","end":"2025-09-29T09:37:00.187390Z","steps":["trace[1500197932] 'process raft request'  (duration: 190.89814ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T09:46:46.960954Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1026}
	{"level":"info","ts":"2025-09-29T09:46:46.979458Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1026,"took":"18.198858ms","hash":3024426036,"current-db-size-bytes":3256320,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1331200,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-29T09:46:46.979500Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3024426036,"revision":1026,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T09:51:46.966849Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1307}
	{"level":"info","ts":"2025-09-29T09:51:46.969912Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1307,"took":"2.747865ms","hash":3921212307,"current-db-size-bytes":3256320,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1851392,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-29T09:51:46.969952Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3921212307,"revision":1307,"compact-revision":1026}
	
	
	==> kernel <==
	 09:55:30 up  3:37,  0 users,  load average: 0.83, 0.54, 1.01
	Linux embed-certs-463478 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b690bc729bd3adf650e3273ee975c81808254796a7fd4cb6050342571627d047] <==
	I0929 09:53:29.381007       1 main.go:301] handling current node
	I0929 09:53:39.381913       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:53:39.381960       1 main.go:301] handling current node
	I0929 09:53:49.380770       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:53:49.380811       1 main.go:301] handling current node
	I0929 09:53:59.385903       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:53:59.385933       1 main.go:301] handling current node
	I0929 09:54:09.381932       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:54:09.381965       1 main.go:301] handling current node
	I0929 09:54:19.387984       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:54:19.388015       1 main.go:301] handling current node
	I0929 09:54:29.381941       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:54:29.382021       1 main.go:301] handling current node
	I0929 09:54:39.382935       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:54:39.382966       1 main.go:301] handling current node
	I0929 09:54:49.379902       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:54:49.379938       1 main.go:301] handling current node
	I0929 09:54:59.385923       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:54:59.385955       1 main.go:301] handling current node
	I0929 09:55:09.383976       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:55:09.384014       1 main.go:301] handling current node
	I0929 09:55:19.380454       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:55:19.380488       1 main.go:301] handling current node
	I0929 09:55:29.379957       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0929 09:55:29.380017       1 main.go:301] handling current node
	
	
	==> kube-apiserver [280012c3ca2625ba7056bf2133ebda307036c512570fd14d7f3b31dcf4a119d2] <==
	E0929 09:51:49.194190       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:51:49.194203       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 09:51:49.194228       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:51:49.195350       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:52:49.194934       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:52:49.194995       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:52:49.195012       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:52:49.196109       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:52:49.196173       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:52:49.196192       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:54:49.195848       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:54:49.195905       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:54:49.195921       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:54:49.196930       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:54:49.197012       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:54:49.197025       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
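
The apiserver entries above show the OpenAPI aggregation controller repeatedly failing to fetch the spec for v1beta1.metrics.k8s.io because the backing metrics-server endpoint answers 503 (service unavailable). One way to confirm the aggregated API's state is to read the APIService's Available condition; the sketch below does so via kubectl. The APIService name is the standard one registered by metrics-server; the rest is illustrative.

	// Print the Available condition and message of the metrics-server APIService.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command(
			"kubectl", "get", "apiservice", "v1beta1.metrics.k8s.io",
			"-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}{" "}{.status.conditions[?(@.type=="Available")].message}`,
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("metrics.k8s.io Available: %s\n", out)
	}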
	
	
	==> kube-controller-manager [a69cb8e81f04649b253d6861374bcc65c81b43d9fa6fc6723b93faf0ed100b7c] <==
	I0929 09:49:22.890943       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:49:52.808056       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:49:52.897913       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:50:22.812387       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:50:22.904368       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:50:52.817345       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:50:52.910810       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:51:22.821263       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:51:22.917149       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:51:52.826261       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:51:52.925350       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:52:22.830201       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:52:22.932279       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:52:52.834501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:52:52.939215       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:53:22.839105       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:53:22.946651       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:53:52.843181       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:53:52.953578       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:54:22.846926       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:54:22.960095       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:54:52.850782       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:54:52.966880       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:55:22.854807       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:55:22.973672       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [28dff9304995e2646cdfd13d90c8aad5e138841b37a9d4d4feb0dfac16fd990b] <==
	I0929 09:36:49.024365       1 server_linux.go:53] "Using iptables proxy"
	I0929 09:36:49.082075       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 09:36:49.183081       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 09:36:49.183184       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0929 09:36:49.183289       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 09:36:49.206465       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 09:36:49.206611       1 server_linux.go:132] "Using iptables Proxier"
	I0929 09:36:49.214130       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 09:36:49.214505       1 server.go:527] "Version info" version="v1.34.1"
	I0929 09:36:49.214543       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:36:49.216163       1 config.go:200] "Starting service config controller"
	I0929 09:36:49.216229       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 09:36:49.216260       1 config.go:309] "Starting node config controller"
	I0929 09:36:49.216307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 09:36:49.216329       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 09:36:49.216334       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 09:36:49.216344       1 config.go:106] "Starting endpoint slice config controller"
	I0929 09:36:49.216353       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 09:36:49.316413       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 09:36:49.316447       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 09:36:49.316463       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 09:36:49.316714       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d6f847bce3be4dae3617fa06d4c956cfb0338ebbf44a8256d68cb664d24d04db] <==
	I0929 09:36:47.276898       1 serving.go:386] Generated self-signed cert in-memory
	I0929 09:36:48.603316       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I0929 09:36:48.603346       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:36:48.613728       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 09:36:48.613770       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 09:36:48.613847       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:36:48.613864       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:36:48.613881       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 09:36:48.613888       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 09:36:48.614899       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 09:36:48.615042       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 09:36:48.714857       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 09:36:48.714990       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 09:36:48.715010       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 09:54:41 embed-certs-463478 kubelet[697]: E0929 09:54:41.405791     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwq99_kubernetes-dashboard(b74cb377-cf5c-4636-8099-44545d0374df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwq99" podUID="b74cb377-cf5c-4636-8099-44545d0374df"
	Sep 29 09:54:45 embed-certs-463478 kubelet[697]: E0929 09:54:45.537786     697 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139685537521593  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:54:45 embed-certs-463478 kubelet[697]: E0929 09:54:45.537824     697 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139685537521593  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:54:47 embed-certs-463478 kubelet[697]: E0929 09:54:47.406407     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-skth6" podUID="8bc7d87b-3756-4e24-8c05-f5c637ce8065"
	Sep 29 09:54:49 embed-certs-463478 kubelet[697]: E0929 09:54:49.407049     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z8j9m" podUID="ed919a2e-20ad-45ae-af2e-22135bc8c096"
	Sep 29 09:54:55 embed-certs-463478 kubelet[697]: I0929 09:54:55.405760     697 scope.go:117] "RemoveContainer" containerID="48be25da7de8c77b3e31aae6588b1824cfef33ca78dd821d0551068a68c8f942"
	Sep 29 09:54:55 embed-certs-463478 kubelet[697]: E0929 09:54:55.405973     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwq99_kubernetes-dashboard(b74cb377-cf5c-4636-8099-44545d0374df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwq99" podUID="b74cb377-cf5c-4636-8099-44545d0374df"
	Sep 29 09:54:55 embed-certs-463478 kubelet[697]: E0929 09:54:55.538978     697 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139695538730746  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:54:55 embed-certs-463478 kubelet[697]: E0929 09:54:55.539016     697 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139695538730746  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:55:00 embed-certs-463478 kubelet[697]: E0929 09:55:00.406771     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-skth6" podUID="8bc7d87b-3756-4e24-8c05-f5c637ce8065"
	Sep 29 09:55:01 embed-certs-463478 kubelet[697]: E0929 09:55:01.406622     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z8j9m" podUID="ed919a2e-20ad-45ae-af2e-22135bc8c096"
	Sep 29 09:55:05 embed-certs-463478 kubelet[697]: E0929 09:55:05.540374     697 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139705540128127  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:55:05 embed-certs-463478 kubelet[697]: E0929 09:55:05.540411     697 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139705540128127  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:55:10 embed-certs-463478 kubelet[697]: I0929 09:55:10.405564     697 scope.go:117] "RemoveContainer" containerID="48be25da7de8c77b3e31aae6588b1824cfef33ca78dd821d0551068a68c8f942"
	Sep 29 09:55:10 embed-certs-463478 kubelet[697]: E0929 09:55:10.405723     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwq99_kubernetes-dashboard(b74cb377-cf5c-4636-8099-44545d0374df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwq99" podUID="b74cb377-cf5c-4636-8099-44545d0374df"
	Sep 29 09:55:14 embed-certs-463478 kubelet[697]: E0929 09:55:14.406399     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-skth6" podUID="8bc7d87b-3756-4e24-8c05-f5c637ce8065"
	Sep 29 09:55:15 embed-certs-463478 kubelet[697]: E0929 09:55:15.541489     697 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139715541251804  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:55:15 embed-certs-463478 kubelet[697]: E0929 09:55:15.541521     697 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139715541251804  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:55:16 embed-certs-463478 kubelet[697]: E0929 09:55:16.406740     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z8j9m" podUID="ed919a2e-20ad-45ae-af2e-22135bc8c096"
	Sep 29 09:55:25 embed-certs-463478 kubelet[697]: I0929 09:55:25.406269     697 scope.go:117] "RemoveContainer" containerID="48be25da7de8c77b3e31aae6588b1824cfef33ca78dd821d0551068a68c8f942"
	Sep 29 09:55:25 embed-certs-463478 kubelet[697]: E0929 09:55:25.406460     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwq99_kubernetes-dashboard(b74cb377-cf5c-4636-8099-44545d0374df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwq99" podUID="b74cb377-cf5c-4636-8099-44545d0374df"
	Sep 29 09:55:25 embed-certs-463478 kubelet[697]: E0929 09:55:25.542863     697 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139725542581036  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:55:25 embed-certs-463478 kubelet[697]: E0929 09:55:25.542901     697 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139725542581036  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:55:27 embed-certs-463478 kubelet[697]: E0929 09:55:27.406184     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-skth6" podUID="8bc7d87b-3756-4e24-8c05-f5c637ce8065"
	Sep 29 09:55:29 embed-certs-463478 kubelet[697]: E0929 09:55:29.406130     697 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z8j9m" podUID="ed919a2e-20ad-45ae-af2e-22135bc8c096"
	
	
	==> storage-provisioner [21c186f4ce38f6efbee0a9497cb3d49ff85ea8b2e21eeef27cbd9b97465b1c4e] <==
	W0929 09:55:05.112596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:07.116011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:07.121303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:09.124312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:09.128222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:11.131220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:11.134955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:13.138419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:13.142496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:15.145786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:15.151190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:17.155115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:17.159556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:19.163497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:19.167387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:21.170174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:21.174080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:23.177103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:23.182293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:25.185329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:25.189360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:27.192642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:27.196597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:29.200374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:29.206636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d53267deead34818cb1d3f144a0f60ec9c2e3038891942d6241709ff777a2486] <==
	I0929 09:36:48.960298       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 09:37:18.965205       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
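The kube-controller-manager block above repeats "stale GroupVersion discovery: metrics.k8s.io/v1beta1", which is consistent with the v1beta1.metrics.k8s.io APIService being registered while its backing metrics-server pod never becomes ready (the addon is deliberately pointed at the unreachable registry fake.domain in this suite). As a hedged aside that is not part of the recorded harness output, one could confirm the APIService and pod state against this profile with the commands below (the k8s-app=metrics-server label is an assumption about the addon's manifests):

	kubectl --context embed-certs-463478 get apiservice v1beta1.metrics.k8s.io -o wide
	kubectl --context embed-certs-463478 -n kube-system get pods -l k8s-app=metrics-server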
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-463478 -n embed-certs-463478
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-463478 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-skth6 kubernetes-dashboard-855c9754f9-z8j9m
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-463478 describe pod metrics-server-746fcd58dc-skth6 kubernetes-dashboard-855c9754f9-z8j9m
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-463478 describe pod metrics-server-746fcd58dc-skth6 kubernetes-dashboard-855c9754f9-z8j9m: exit status 1 (63.02128ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-skth6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-z8j9m" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-463478 describe pod metrics-server-746fcd58dc-skth6 kubernetes-dashboard-855c9754f9-z8j9m: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.53s)
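This failure, like the no-preload one that follows, traces to the kubelet condition shown in the logs above: back-off pulling docker.io/kubernetesui/dashboard:v2.7.0 because of Docker Hub's unauthenticated pull rate limit (toomanyrequests). A possible local mitigation, sketched here and not part of the recorded run, is to side-load the image into the affected profile before re-running; this assumes the host already has the image cached or is authenticated to Docker Hub, and a side-loaded tag may not satisfy the digest-pinned reference on every runtime:

	docker pull kubernetesui/dashboard:v2.7.0
	minikube -p embed-certs-463478 image load kubernetesui/dashboard:v2.7.0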

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d8kf7" [5cf5352a-bd50-49be-812d-0483e26398c0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 09:47:18.486366  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-730717 -n no-preload-730717
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 09:56:13.915824193 +0000 UTC m=+5221.562449645
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-730717 describe po kubernetes-dashboard-855c9754f9-d8kf7 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context no-preload-730717 describe po kubernetes-dashboard-855c9754f9-d8kf7 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-d8kf7
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-730717/192.168.76.2
Start Time:       Mon, 29 Sep 2025 09:37:30 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jrz2z (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-jrz2z:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d8kf7 to no-preload-730717
Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     13m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     13m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m37s (x48 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m57s (x51 over 18m)  kubelet            Error: ImagePullBackOff
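The events above show the pull failing with toomanyrequests rather than a scheduling or RBAC problem. A quick check of the container's waiting reason, sketched here rather than produced by the harness (the jsonpath expression is an illustration; the pod name, namespace, and context are taken from the output above), would be:

	kubectl --context no-preload-730717 -n kubernetes-dashboard get pod kubernetes-dashboard-855c9754f9-d8kf7 -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'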
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-730717 logs kubernetes-dashboard-855c9754f9-d8kf7 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-730717 logs kubernetes-dashboard-855c9754f9-d8kf7 -n kubernetes-dashboard: exit status 1 (72.455233ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-d8kf7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context no-preload-730717 logs kubernetes-dashboard-855c9754f9-d8kf7 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-730717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-730717
helpers_test.go:243: (dbg) docker inspect no-preload-730717:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486",
	        "Created": "2025-09-29T09:35:52.393159276Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 740038,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T09:37:17.124718Z",
	            "FinishedAt": "2025-09-29T09:37:16.3014353Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486/hostname",
	        "HostsPath": "/var/lib/docker/containers/f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486/hosts",
	        "LogPath": "/var/lib/docker/containers/f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486/f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486-json.log",
	        "Name": "/no-preload-730717",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-730717:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-730717",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f03b3d6dc029ff9cd69e7e4c3bd96770f72537821aa9b55c80b0c2d469b2a486",
	                "LowerDir": "/var/lib/docker/overlay2/b13d13e718a605b33bad67626c1cf8784cd64c71ec8c1cf72aa47d64d928ebdb-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b13d13e718a605b33bad67626c1cf8784cd64c71ec8c1cf72aa47d64d928ebdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b13d13e718a605b33bad67626c1cf8784cd64c71ec8c1cf72aa47d64d928ebdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b13d13e718a605b33bad67626c1cf8784cd64c71ec8c1cf72aa47d64d928ebdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-730717",
	                "Source": "/var/lib/docker/volumes/no-preload-730717/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-730717",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-730717",
	                "name.minikube.sigs.k8s.io": "no-preload-730717",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52711fe838b0d0f85826aa630431aa0175c6ce826754c3b8e97871aa8a75a141",
	            "SandboxKey": "/var/run/docker/netns/52711fe838b0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-730717": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:23:a0:fc:0a:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d924499e4b51e9f2d7e1ded72ae4f935ea286dd164b95d482bfdb7bef2c79707",
	                    "EndpointID": "bfbc4b7c48c4645a3dae999303c53fbe66f702e7f33d480f1ff9f332009aaf21",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-730717",
	                        "f03b3d6dc029"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-730717 -n no-preload-730717
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-730717 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-730717 logs -n 25: (1.196354617s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ image   │ newest-cni-879079 image list --format=json                                                                                                                               │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ pause   │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                              │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ unpause │ -p newest-cni-879079 --alsologtostderr -v=1                                                                                                                              │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                     │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ delete  │ -p newest-cni-879079                                                                                                                                                     │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-730717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p no-preload-730717 --alsologtostderr -v=3                                                                                                                              │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable dashboard -p no-preload-730717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ start   │ -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-547715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p default-k8s-diff-port-547715 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-547715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ image   │ old-k8s-version-383226 image list --format=json                                                                                                                          │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ pause   │ -p old-k8s-version-383226 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ unpause │ -p old-k8s-version-383226 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-383226                                                                                                                                                │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-383226                                                                                                                                                │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ image   │ embed-certs-463478 image list --format=json                                                                                                                              │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:55 UTC │ 29 Sep 25 09:55 UTC │
	│ pause   │ -p embed-certs-463478 --alsologtostderr -v=1                                                                                                                             │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:55 UTC │ 29 Sep 25 09:55 UTC │
	│ unpause │ -p embed-certs-463478 --alsologtostderr -v=1                                                                                                                             │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:55 UTC │ 29 Sep 25 09:55 UTC │
	│ delete  │ -p embed-certs-463478                                                                                                                                                    │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:55 UTC │ 29 Sep 25 09:55 UTC │
	│ delete  │ -p embed-certs-463478                                                                                                                                                    │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:55 UTC │ 29 Sep 25 09:55 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 09:38:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 09:38:02.602451  744475 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:38:02.604572  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604588  744475 out.go:374] Setting ErrFile to fd 2...
	I0929 09:38:02.604596  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604882  744475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:38:02.605487  744475 out.go:368] Setting JSON to false
	I0929 09:38:02.606828  744475 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12032,"bootTime":1759126651,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 09:38:02.606958  744475 start.go:140] virtualization: kvm guest
	I0929 09:38:02.608781  744475 out.go:179] * [default-k8s-diff-port-547715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 09:38:02.610638  744475 notify.go:220] Checking for updates...
	I0929 09:38:02.610689  744475 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 09:38:02.611947  744475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 09:38:02.613292  744475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:02.614515  744475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 09:38:02.615846  744475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 09:38:02.617298  744475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 09:38:02.619049  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:02.619871  744475 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 09:38:02.651910  744475 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 09:38:02.652021  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.724566  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.711673677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.724736  744475 docker.go:318] overlay module found
	I0929 09:38:02.726847  744475 out.go:179] * Using the docker driver based on existing profile
	I0929 09:38:02.727965  744475 start.go:304] selected driver: docker
	I0929 09:38:02.727982  744475 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.728131  744475 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 09:38:02.728938  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.798201  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.786507737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.798574  744475 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:02.798625  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:02.798695  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:02.798744  744475 start.go:348] cluster config:
	{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.803960  744475 out.go:179] * Starting "default-k8s-diff-port-547715" primary control-plane node in "default-k8s-diff-port-547715" cluster
	I0929 09:38:02.805367  744475 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 09:38:02.806633  744475 out.go:179] * Pulling base image v0.0.48 ...
	I0929 09:38:02.807764  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:02.807815  744475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 09:38:02.807849  744475 cache.go:58] Caching tarball of preloaded images
	I0929 09:38:02.807847  744475 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 09:38:02.807982  744475 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 09:38:02.808000  744475 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 09:38:02.808163  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:02.832169  744475 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 09:38:02.832193  744475 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 09:38:02.832223  744475 cache.go:232] Successfully downloaded all kic artifacts
	I0929 09:38:02.832255  744475 start.go:360] acquireMachinesLock for default-k8s-diff-port-547715: {Name:mkef8140f377b4de895c8571ff44e24be4754e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 09:38:02.832319  744475 start.go:364] duration metric: took 42.901µs to acquireMachinesLock for "default-k8s-diff-port-547715"
	I0929 09:38:02.832343  744475 start.go:96] Skipping create...Using existing machine configuration
	I0929 09:38:02.832351  744475 fix.go:54] fixHost starting: 
	I0929 09:38:02.832639  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:02.854072  744475 fix.go:112] recreateIfNeeded on default-k8s-diff-port-547715: state=Stopped err=<nil>
	W0929 09:38:02.854102  744475 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 09:38:02.225099  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:04.724187  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:06.724381  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:02.857616  744475 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-547715" ...
	I0929 09:38:02.857727  744475 cli_runner.go:164] Run: docker start default-k8s-diff-port-547715
	I0929 09:38:03.156711  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:03.180888  744475 kic.go:430] container "default-k8s-diff-port-547715" state is running.
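The inspect/start/inspect sequence above is the fixHost path: the profile's container was found Stopped, so minikube restarts it and re-checks its state. A minimal standalone sketch of that check-then-start logic, using the container name from the log (this is the shell equivalent, not minikube's Go code):

    # Check the container's state and start it only if it is not already running.
    name=default-k8s-diff-port-547715
    state=$(docker container inspect -f '{{.State.Status}}' "$name")
    if [ "$state" != "running" ]; then
      docker start "$name"
    fi
    docker container inspect -f '{{.State.Status}}' "$name"   # should now report "running"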
	I0929 09:38:03.181888  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:03.203574  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:03.203810  744475 machine.go:93] provisionDockerMachine start ...
	I0929 09:38:03.203918  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:03.225450  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:03.225788  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:03.225809  744475 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 09:38:03.226519  744475 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33470->127.0.0.1:33506: read: connection reset by peer
	I0929 09:38:06.363220  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.363248  744475 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-547715"
	I0929 09:38:06.363324  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.381317  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.381536  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.381550  744475 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-547715 && echo "default-k8s-diff-port-547715" | sudo tee /etc/hostname
	I0929 09:38:06.531735  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.531842  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.549948  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.550236  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.550256  744475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-547715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-547715/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-547715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 09:38:06.685613  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 09:38:06.685649  744475 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 09:38:06.685684  744475 ubuntu.go:190] setting up certificates
	I0929 09:38:06.685695  744475 provision.go:84] configureAuth start
	I0929 09:38:06.685750  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:06.704839  744475 provision.go:143] copyHostCerts
	I0929 09:38:06.704915  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem, removing ...
	I0929 09:38:06.704934  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem
	I0929 09:38:06.705006  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 09:38:06.705139  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem, removing ...
	I0929 09:38:06.705152  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem
	I0929 09:38:06.705182  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 09:38:06.705261  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem, removing ...
	I0929 09:38:06.705269  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem
	I0929 09:38:06.705295  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 09:38:06.705471  744475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-547715 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-547715 localhost minikube]
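The provision step above generates a fresh server certificate signed by the minikube CA, with SANs for 127.0.0.1, 192.168.85.2, the profile name, localhost and minikube. minikube does this in Go; an illustrative openssl equivalent for the same SAN set would look roughly like this (output file names are placeholders):

    CA=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem
    CAKEY=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem
    # New key + CSR for the machine, then sign it with the minikube CA and attach the SANs.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.default-k8s-diff-port-547715"
    openssl x509 -req -in server.csr -CA "$CA" -CAkey "$CAKEY" -CAcreateserial -days 365 \
      -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:default-k8s-diff-port-547715,DNS:localhost,DNS:minikube')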
	I0929 09:38:06.863319  744475 provision.go:177] copyRemoteCerts
	I0929 09:38:06.863393  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 09:38:06.863443  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.882627  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:06.979437  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 09:38:07.004710  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 09:38:07.029798  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 09:38:07.054802  744475 provision.go:87] duration metric: took 369.089658ms to configureAuth
	I0929 09:38:07.054846  744475 ubuntu.go:206] setting minikube options for container-runtime
	I0929 09:38:07.055025  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:07.055152  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.073937  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:07.074181  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:07.074200  744475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 09:38:07.357669  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 09:38:07.357696  744475 machine.go:96] duration metric: took 4.15386954s to provisionDockerMachine
	I0929 09:38:07.357709  744475 start.go:293] postStartSetup for "default-k8s-diff-port-547715" (driver="docker")
	I0929 09:38:07.357723  744475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 09:38:07.357795  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 09:38:07.357864  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.376587  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.473948  744475 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 09:38:07.477599  744475 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 09:38:07.477638  744475 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 09:38:07.477651  744475 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 09:38:07.477659  744475 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 09:38:07.477675  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 09:38:07.477729  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 09:38:07.477798  744475 filesync.go:149] local asset: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem -> 3862252.pem in /etc/ssl/certs
	I0929 09:38:07.477941  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 09:38:07.487030  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:07.511935  744475 start.go:296] duration metric: took 154.207911ms for postStartSetup
	I0929 09:38:07.512029  744475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:38:07.512065  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.530146  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.622415  744475 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 09:38:07.627142  744475 fix.go:56] duration metric: took 4.794784277s for fixHost
	I0929 09:38:07.627172  744475 start.go:83] releasing machines lock for "default-k8s-diff-port-547715", held for 4.794838826s
	I0929 09:38:07.627231  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:07.645874  744475 ssh_runner.go:195] Run: cat /version.json
	I0929 09:38:07.645918  744475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 09:38:07.645945  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.645972  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.664991  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.665181  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.828453  744475 ssh_runner.go:195] Run: systemctl --version
	I0929 09:38:07.833549  744475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 09:38:07.976610  744475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 09:38:07.981640  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:07.991646  744475 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 09:38:07.991738  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:08.001522  744475 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
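The two find/mv runs above disable any pre-existing loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, leaving the CNI that minikube installs (kindnet here) as the only active one. The same idea as a standalone sketch, with the globs quoted so the shell does not expand them (paths and suffix are taken from the log, not from minikube's source):

    # Rename any loopback CNI config so the runtime ignores it.
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
      ! -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    # Same treatment for bridge/podman configs that would conflict with kindnet.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \( -name '*bridge*' -o -name '*podman*' \) \
      ! -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;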
	I0929 09:38:08.001550  744475 start.go:495] detecting cgroup driver to use...
	I0929 09:38:08.001586  744475 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 09:38:08.001645  744475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 09:38:08.014507  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 09:38:08.026523  744475 docker.go:218] disabling cri-docker service (if available) ...
	I0929 09:38:08.026594  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 09:38:08.040674  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 09:38:08.052914  744475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 09:38:08.121663  744475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 09:38:08.190873  744475 docker.go:234] disabling docker service ...
	I0929 09:38:08.190996  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 09:38:08.203929  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 09:38:08.215853  744475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 09:38:08.282230  744475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 09:38:08.347410  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 09:38:08.359320  744475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 09:38:08.376309  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:08.524854  744475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 09:38:08.524933  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.536486  744475 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 09:38:08.536545  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.547317  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.557769  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.568183  744475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 09:38:08.578182  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.588665  744475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.598857  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.609520  744475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 09:38:08.618464  744475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 09:38:08.627869  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:08.694951  744475 ssh_runner.go:195] Run: sudo systemctl restart crio
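Taken together, the sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and a default sysctl that allows unprivileged binds to ports below 1024, before reloading and restarting the service. Condensed into one place (the /etc/cni/net.mk removal and the ip_forward step from the log are omitted; $CONF is only added for readability):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Drop any stale entry, make sure a default_sysctls list exists, then allow unprivileged low ports.
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio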
	I0929 09:38:08.976752  744475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 09:38:08.976819  744475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 09:38:08.980869  744475 start.go:563] Will wait 60s for crictl version
	I0929 09:38:08.980932  744475 ssh_runner.go:195] Run: which crictl
	I0929 09:38:08.984701  744475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 09:38:09.019500  744475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 09:38:09.019620  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.055087  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.091964  744475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	W0929 09:38:08.724626  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:09.223924  739826 pod_ready.go:94] pod "coredns-66bc5c9577-ncwp4" is "Ready"
	I0929 09:38:09.224002  739826 pod_ready.go:86] duration metric: took 41.005435401s for pod "coredns-66bc5c9577-ncwp4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.226573  739826 pod_ready.go:83] waiting for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.230177  739826 pod_ready.go:94] pod "etcd-no-preload-730717" is "Ready"
	I0929 09:38:09.230196  739826 pod_ready.go:86] duration metric: took 3.600648ms for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.232019  739826 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.235556  739826 pod_ready.go:94] pod "kube-apiserver-no-preload-730717" is "Ready"
	I0929 09:38:09.235574  739826 pod_ready.go:86] duration metric: took 3.535675ms for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.237200  739826 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.422451  739826 pod_ready.go:94] pod "kube-controller-manager-no-preload-730717" is "Ready"
	I0929 09:38:09.422486  739826 pod_ready.go:86] duration metric: took 185.263743ms for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.623052  739826 pod_ready.go:83] waiting for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.022664  739826 pod_ready.go:94] pod "kube-proxy-4bmgw" is "Ready"
	I0929 09:38:10.022689  739826 pod_ready.go:86] duration metric: took 399.612543ms for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.224443  739826 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622809  739826 pod_ready.go:94] pod "kube-scheduler-no-preload-730717" is "Ready"
	I0929 09:38:10.622852  739826 pod_ready.go:86] duration metric: took 398.374387ms for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622869  739826 pod_ready.go:40] duration metric: took 42.407933129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:10.670550  739826 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:10.673808  739826 out.go:179] * Done! kubectl is now configured to use "no-preload-730717" cluster and "default" namespace by default
	I0929 09:38:09.093120  744475 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-547715 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 09:38:09.111264  744475 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 09:38:09.115466  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:09.127999  744475 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 09:38:09.128194  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.274999  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.416048  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.554074  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:09.554387  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.693270  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.833942  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.976460  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.021351  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.021374  744475 crio.go:433] Images already preloaded, skipping extraction
	I0929 09:38:10.021423  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.057863  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.057891  744475 cache_images.go:85] Images are preloaded, skipping loading
	I0929 09:38:10.057901  744475 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I0929 09:38:10.058037  744475 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-547715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 09:38:10.058111  744475 ssh_runner.go:195] Run: crio config
	I0929 09:38:10.102165  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:10.102193  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:10.102207  744475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 09:38:10.102236  744475 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-547715 NodeName:default-k8s-diff-port-547715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 09:38:10.102404  744475 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-547715"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 09:38:10.102481  744475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 09:38:10.112188  744475 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 09:38:10.112255  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 09:38:10.121661  744475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 09:38:10.140487  744475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 09:38:10.160494  744475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0929 09:38:10.179722  744475 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 09:38:10.183977  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
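Both /etc/hosts edits above (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same idempotent pattern: drop any existing line for the name, append the new IP-to-name mapping, and copy the temp file back into place with sudo. A generic sketch of that pattern, with the hostname and IP as parameters (the helper name is invented for illustration):

    update_hosts_entry() {
      local ip="$1" name="$2" tmp
      tmp=$(mktemp)
      # Keep everything except an old mapping for this name, then append the fresh one.
      grep -v $'\t'"${name}"'$' /etc/hosts > "$tmp"
      printf '%s\t%s\n' "$ip" "$name" >> "$tmp"
      sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
    }
    # e.g. update_hosts_entry 192.168.85.2 control-plane.minikube.internal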
	I0929 09:38:10.196126  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.262691  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.292254  744475 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715 for IP: 192.168.85.2
	I0929 09:38:10.292283  744475 certs.go:194] generating shared ca certs ...
	I0929 09:38:10.292301  744475 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.292443  744475 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 09:38:10.292483  744475 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 09:38:10.292493  744475 certs.go:256] generating profile certs ...
	I0929 09:38:10.292592  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/client.key
	I0929 09:38:10.292649  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key.78d67a41
	I0929 09:38:10.292690  744475 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key
	I0929 09:38:10.292789  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem (1338 bytes)
	W0929 09:38:10.292816  744475 certs.go:480] ignoring /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225_empty.pem, impossibly tiny 0 bytes
	I0929 09:38:10.292825  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 09:38:10.292877  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 09:38:10.292902  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 09:38:10.292924  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 09:38:10.292963  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:10.293652  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 09:38:10.320976  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 09:38:10.349012  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 09:38:10.381487  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 09:38:10.406553  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 09:38:10.432469  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 09:38:10.458734  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 09:38:10.483339  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 09:38:10.508019  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /usr/share/ca-certificates/3862252.pem (1708 bytes)
	I0929 09:38:10.533382  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 09:38:10.558362  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem --> /usr/share/ca-certificates/386225.pem (1338 bytes)
	I0929 09:38:10.583377  744475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 09:38:10.602070  744475 ssh_runner.go:195] Run: openssl version
	I0929 09:38:10.607660  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3862252.pem && ln -fs /usr/share/ca-certificates/3862252.pem /etc/ssl/certs/3862252.pem"
	I0929 09:38:10.617911  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622307  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 08:48 /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622354  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.629918  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3862252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 09:38:10.640804  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 09:38:10.651151  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655258  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655316  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.662603  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 09:38:10.672822  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386225.pem && ln -fs /usr/share/ca-certificates/386225.pem /etc/ssl/certs/386225.pem"
	I0929 09:38:10.683319  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687277  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 08:48 /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687348  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.696079  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386225.pem /etc/ssl/certs/51391683.0"
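Each openssl x509 -hash / ln -fs pair above installs a CA certificate under its OpenSSL subject-hash name (for example b5213941.0) in /etc/ssl/certs, which is how OpenSSL locates trusted CAs at verification time. Done by hand for one of the certificates from the log:

    CERT=/usr/share/ca-certificates/minikubeCA.pem    # one of the CAs copied above
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # ".0" = first certificate with this hash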
	I0929 09:38:10.707660  744475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 09:38:10.711977  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 09:38:10.719705  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 09:38:10.727227  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 09:38:10.734938  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 09:38:10.742331  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 09:38:10.750000  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
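Each of the -checkend 86400 runs above exits non-zero if the certificate will expire within the next 24 hours, presumably so stale control-plane certificates can be regenerated before the cluster is restarted. Checking a single certificate by hand:

    CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -checkend 86400 -in "$CRT"; then
      echo "certificate is valid for at least another 24h"
    else
      echo "certificate expires within 24h (or has already expired)"
    fi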
	I0929 09:38:10.758994  744475 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docke
r MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:10.759111  744475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 09:38:10.759156  744475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 09:38:10.801701  744475 cri.go:89] found id: ""
	I0929 09:38:10.801777  744475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 09:38:10.814003  744475 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 09:38:10.814030  744475 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 09:38:10.814082  744475 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 09:38:10.825280  744475 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:38:10.826421  744475 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-547715" does not appear in /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.827379  744475 kubeconfig.go:62] /home/jenkins/minikube-integration/21650-382648/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-547715" cluster setting kubeconfig missing "default-k8s-diff-port-547715" context setting]
	I0929 09:38:10.828702  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
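The repair above is needed because the shared kubeconfig has no cluster or context entry for default-k8s-diff-port-547715. minikube writes the entries itself; the rough equivalent with plain kubectl would be the following (server address and paths are taken from the log where possible, and the exact values minikube writes may differ):

    export KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
    PROFILE=/home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715
    kubectl config set-cluster default-k8s-diff-port-547715 \
      --server=https://192.168.85.2:8444 \
      --certificate-authority=/home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt
    kubectl config set-credentials default-k8s-diff-port-547715 \
      --client-certificate="$PROFILE/client.crt" --client-key="$PROFILE/client.key"
    kubectl config set-context default-k8s-diff-port-547715 \
      --cluster=default-k8s-diff-port-547715 --user=default-k8s-diff-port-547715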
	I0929 09:38:10.830983  744475 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 09:38:10.843171  744475 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 09:38:10.843214  744475 kubeadm.go:593] duration metric: took 29.177344ms to restartPrimaryControlPlane
	I0929 09:38:10.843227  744475 kubeadm.go:394] duration metric: took 84.244515ms to StartCluster
	I0929 09:38:10.843248  744475 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.843363  744475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.845603  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.846384  744475 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 09:38:10.846454  744475 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 09:38:10.846542  744475 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846565  744475 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846574  744475 addons.go:247] addon storage-provisioner should already be in state true
	I0929 09:38:10.846575  744475 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846596  744475 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846614  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846620  744475 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:10.846621  744475 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-547715"
	I0929 09:38:10.846618  744475 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846630  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:10.846642  744475 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846656  744475 addons.go:247] addon dashboard should already be in state true
	W0929 09:38:10.846631  744475 addons.go:247] addon metrics-server should already be in state true
	I0929 09:38:10.846681  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846697  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846974  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847135  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847150  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847155  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.848072  744475 out.go:179] * Verifying Kubernetes components...
	I0929 09:38:10.849415  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.877953  744475 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 09:38:10.877980  744475 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 09:38:10.878525  744475 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.878545  744475 addons.go:247] addon default-storageclass should already be in state true
	I0929 09:38:10.878575  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.879047  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.879403  744475 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 09:38:10.879439  744475 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:10.879448  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 09:38:10.879475  744475 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 09:38:10.879548  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.879454  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 09:38:10.879612  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.883150  744475 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 09:38:10.884341  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 09:38:10.884361  744475 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 09:38:10.884428  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.910318  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.910796  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.911948  744475 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:10.911964  744475 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 09:38:10.912016  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.914592  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.935385  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.956363  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.989150  744475 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:11.038321  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:11.042162  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 09:38:11.042187  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 09:38:11.047218  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 09:38:11.047242  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 09:38:11.070239  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.072804  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 09:38:11.072828  744475 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 09:38:11.078863  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 09:38:11.078893  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 09:38:11.104886  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.104914  744475 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 09:38:11.110131  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 09:38:11.110158  744475 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 09:38:11.142191  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 09:38:11.142219  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 09:38:11.148094  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.148238  744475 retry.go:31] will retry after 359.205678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.151384  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.179885  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 09:38:11.179923  744475 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 09:38:11.182481  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.182514  744475 retry.go:31] will retry after 316.417959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.208649  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 09:38:11.208682  744475 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 09:38:11.232655  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 09:38:11.232724  744475 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 09:38:11.252807  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 09:38:11.252860  744475 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 09:38:11.272945  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.272972  744475 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 09:38:11.292603  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.499678  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.508207  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:12.841081  744475 node_ready.go:49] node "default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:12.841123  744475 node_ready.go:38] duration metric: took 1.85187108s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:12.841142  744475 api_server.go:52] waiting for apiserver process to appear ...
	I0929 09:38:12.841200  744475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:38:13.424995  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.273447364s)
	I0929 09:38:13.425060  744475 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:13.425163  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.132513063s)
	I0929 09:38:13.425661  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.925949942s)
	I0929 09:38:13.425900  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.917662767s)
	I0929 09:38:13.426006  744475 api_server.go:72] duration metric: took 2.57958819s to wait for apiserver process to appear ...
	I0929 09:38:13.426024  744475 api_server.go:88] waiting for apiserver healthz status ...
	I0929 09:38:13.426045  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.427072  744475 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-547715 addons enable metrics-server
	
	I0929 09:38:13.431499  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.431522  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:13.435572  744475 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0929 09:38:13.436883  744475 addons.go:514] duration metric: took 2.590443822s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0929 09:38:13.926913  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.932318  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.932348  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:14.426994  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:14.431739  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0929 09:38:14.432753  744475 api_server.go:141] control plane version: v1.34.1
	I0929 09:38:14.432785  744475 api_server.go:131] duration metric: took 1.006754243s to wait for apiserver health ...
	I0929 09:38:14.432798  744475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 09:38:14.435903  744475 system_pods.go:59] 9 kube-system pods found
	I0929 09:38:14.435952  744475 system_pods.go:61] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.435967  744475 system_pods.go:61] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.435982  744475 system_pods.go:61] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.435998  744475 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.436014  744475 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.436023  744475 system_pods.go:61] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.436033  744475 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.436045  744475 system_pods.go:61] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.436053  744475 system_pods.go:61] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.436063  744475 system_pods.go:74] duration metric: took 3.257318ms to wait for pod list to return data ...
	I0929 09:38:14.436077  744475 default_sa.go:34] waiting for default service account to be created ...
	I0929 09:38:14.438271  744475 default_sa.go:45] found service account: "default"
	I0929 09:38:14.438293  744475 default_sa.go:55] duration metric: took 2.206178ms for default service account to be created ...
	I0929 09:38:14.438304  744475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 09:38:14.441520  744475 system_pods.go:86] 9 kube-system pods found
	I0929 09:38:14.441555  744475 system_pods.go:89] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.441569  744475 system_pods.go:89] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.441583  744475 system_pods.go:89] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.441591  744475 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.441606  744475 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.441613  744475 system_pods.go:89] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.441622  744475 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.441633  744475 system_pods.go:89] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.441641  744475 system_pods.go:89] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.441654  744475 system_pods.go:126] duration metric: took 3.342797ms to wait for k8s-apps to be running ...
	I0929 09:38:14.441667  744475 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 09:38:14.441718  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:38:14.457198  744475 system_svc.go:56] duration metric: took 15.510885ms WaitForService to wait for kubelet
	I0929 09:38:14.457234  744475 kubeadm.go:578] duration metric: took 3.610818298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:14.457257  744475 node_conditions.go:102] verifying NodePressure condition ...
	I0929 09:38:14.460508  744475 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 09:38:14.460534  744475 node_conditions.go:123] node cpu capacity is 8
	I0929 09:38:14.460550  744475 node_conditions.go:105] duration metric: took 3.287088ms to run NodePressure ...
	I0929 09:38:14.460566  744475 start.go:241] waiting for startup goroutines ...
	I0929 09:38:14.460575  744475 start.go:246] waiting for cluster config update ...
	I0929 09:38:14.460591  744475 start.go:255] writing updated cluster config ...
	I0929 09:38:14.461011  744475 ssh_runner.go:195] Run: rm -f paused
	I0929 09:38:14.465262  744475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:14.469249  744475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 09:38:16.474616  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:18.974817  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:21.474679  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:23.974653  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:25.974904  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:27.975234  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:30.474414  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:32.475244  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:34.975746  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:37.474689  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:39.974324  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:42.474794  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:44.476364  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:46.974499  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:49.474657  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:51.474940  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	I0929 09:38:52.974403  744475 pod_ready.go:94] pod "coredns-66bc5c9577-szmnf" is "Ready"
	I0929 09:38:52.974429  744475 pod_ready.go:86] duration metric: took 38.50515659s for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.977032  744475 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.980878  744475 pod_ready.go:94] pod "etcd-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.980904  744475 pod_ready.go:86] duration metric: took 3.847603ms for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.982681  744475 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.986175  744475 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.986196  744475 pod_ready.go:86] duration metric: took 3.493752ms for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.988006  744475 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.172805  744475 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:53.172860  744475 pod_ready.go:86] duration metric: took 184.829323ms for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.372987  744475 pod_ready.go:83] waiting for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.772398  744475 pod_ready.go:94] pod "kube-proxy-tklgn" is "Ready"
	I0929 09:38:53.772428  744475 pod_ready.go:86] duration metric: took 399.413461ms for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.972993  744475 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373344  744475 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:54.373370  744475 pod_ready.go:86] duration metric: took 400.353446ms for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373382  744475 pod_ready.go:40] duration metric: took 39.908092821s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:54.420218  744475 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:54.422092  744475 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-547715" cluster and "default" namespace by default
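The start log above polls the apiserver's /healthz endpoint (api_server.go) roughly every half second: it first gets 500 responses listing the poststarthooks that are still failing, then a 200 "ok" about a second later. A minimal standalone sketch of that polling pattern follows; it is not minikube's actual implementation, the file name and the two-minute budget are illustrative, and TLS verification is skipped only because this is a quick local probe against the endpoint taken from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative probe only: the apiserver serves a cluster CA cert,
			// so certificate verification is skipped here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8444/healthz?verbose")
		if err != nil {
			fmt.Println("healthz request failed, retrying:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy:", string(body))
			return
		}
		// A 500 here carries the verbose check list seen in the log,
		// e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
		fmt.Printf("healthz returned %d, retrying...\n", resp.StatusCode)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver healthz")
}

Run against a cluster that is still coming up, this prints the failing checks first and exits as soon as the apiserver reports healthy, mirroring the 500-then-200 sequence recorded above.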
	
	
	==> CRI-O <==
	Sep 29 09:54:54 no-preload-730717 crio[562]: time="2025-09-29 09:54:54.093627774Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=34ac28c2-15ba-47a5-8359-f1f361a1219f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:55 no-preload-730717 crio[562]: time="2025-09-29 09:54:55.092504546Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=57effe2c-3552-4ffd-b780-05e3f5836649 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:54:55 no-preload-730717 crio[562]: time="2025-09-29 09:54:55.092751473Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=57effe2c-3552-4ffd-b780-05e3f5836649 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:09 no-preload-730717 crio[562]: time="2025-09-29 09:55:09.092351016Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c24daacf-4c52-482a-8dbe-f30759750e2a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:09 no-preload-730717 crio[562]: time="2025-09-29 09:55:09.092554214Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c24daacf-4c52-482a-8dbe-f30759750e2a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:10 no-preload-730717 crio[562]: time="2025-09-29 09:55:10.093360060Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9e6fede7-f002-4110-ab05-2fcc88ae6e84 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:10 no-preload-730717 crio[562]: time="2025-09-29 09:55:10.093686073Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9e6fede7-f002-4110-ab05-2fcc88ae6e84 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:21 no-preload-730717 crio[562]: time="2025-09-29 09:55:21.093018188Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=77c03e10-1b10-4316-8457-f0613dc81a37 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:21 no-preload-730717 crio[562]: time="2025-09-29 09:55:21.093265935Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=77c03e10-1b10-4316-8457-f0613dc81a37 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:25 no-preload-730717 crio[562]: time="2025-09-29 09:55:25.092581455Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3b8f5fbd-d435-43b7-9a91-5034c5d58e49 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:25 no-preload-730717 crio[562]: time="2025-09-29 09:55:25.092859434Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3b8f5fbd-d435-43b7-9a91-5034c5d58e49 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:32 no-preload-730717 crio[562]: time="2025-09-29 09:55:32.093019233Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=257fcc43-10c9-4228-8a55-ebbaa5f17c21 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:32 no-preload-730717 crio[562]: time="2025-09-29 09:55:32.093296517Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=257fcc43-10c9-4228-8a55-ebbaa5f17c21 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:37 no-preload-730717 crio[562]: time="2025-09-29 09:55:37.092599243Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=143e2112-7539-4869-90cc-a9c832f39348 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:37 no-preload-730717 crio[562]: time="2025-09-29 09:55:37.092897408Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=143e2112-7539-4869-90cc-a9c832f39348 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:46 no-preload-730717 crio[562]: time="2025-09-29 09:55:46.092819186Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=773eac42-8991-4024-9532-bd2785999e52 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:46 no-preload-730717 crio[562]: time="2025-09-29 09:55:46.093161933Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=773eac42-8991-4024-9532-bd2785999e52 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:49 no-preload-730717 crio[562]: time="2025-09-29 09:55:49.092928837Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6a55bb14-3278-4f69-8b4c-5ef7aeb52372 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:49 no-preload-730717 crio[562]: time="2025-09-29 09:55:49.093186675Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6a55bb14-3278-4f69-8b4c-5ef7aeb52372 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:59 no-preload-730717 crio[562]: time="2025-09-29 09:55:59.092812148Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e36bbf19-b36a-4d5c-88aa-c093f81d5c85 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:59 no-preload-730717 crio[562]: time="2025-09-29 09:55:59.093051657Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e36bbf19-b36a-4d5c-88aa-c093f81d5c85 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:04 no-preload-730717 crio[562]: time="2025-09-29 09:56:04.093693100Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9888624e-03e2-4d57-992c-06647f0feb5e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:04 no-preload-730717 crio[562]: time="2025-09-29 09:56:04.094021790Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9888624e-03e2-4d57-992c-06647f0feb5e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:12 no-preload-730717 crio[562]: time="2025-09-29 09:56:12.092913086Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=dbcf96db-1669-4552-9e37-03ed906de725 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:12 no-preload-730717 crio[562]: time="2025-09-29 09:56:12.093155465Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=dbcf96db-1669-4552-9e37-03ed906de725 name=/runtime.v1.ImageService/ImageStatus
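The CRI-O log above shows the kubelet repeatedly asking the runtime for the status of two images it never finds locally: the metrics-server test image, which this suite appears to point at an unreachable fake.domain registry (see CustomAddonRegistries earlier in the report), and the dashboard image pinned by digest. A minimal sketch of reproducing that check by hand with crictl is below; the file name is hypothetical and it assumes crictl and sudo are available on the node.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	images := []string{
		"fake.domain/registry.k8s.io/echoserver:1.4",
		"docker.io/kubernetesui/dashboard:v2.7.0",
	}
	for _, img := range images {
		// crictl inspecti exits non-zero when the image is not in the local
		// store, which corresponds to the "Image ... not found" lines above.
		out, err := exec.Command("sudo", "crictl", "inspecti", img).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: not found (%v)\n%s", img, err, out)
			continue
		}
		fmt.Printf("%s: present\n", img)
	}
}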
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c3192cb52ef9a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   46f2804480b20       dashboard-metrics-scraper-6ffb444bf9-vrtpm
	bdf81a55b041d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   a10dc97aa6f13       storage-provisioner
	b42daf67456ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   329fa422a72e3       coredns-66bc5c9577-ncwp4
	2525216b46e99       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   dc691a9058172       busybox
	9ac0db1de5c9e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   a10dc97aa6f13       storage-provisioner
	b322c8a93a311       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   18 minutes ago      Running             kube-proxy                  1                   da67f24f8ba06       kube-proxy-4bmgw
	eed7243881418       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   34b363fa78c75       kindnet-97tnr
	1d1678bd6daae       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   18 minutes ago      Running             kube-apiserver              1                   9b42da1f49df5       kube-apiserver-no-preload-730717
	8c5200e560089       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   18 minutes ago      Running             kube-controller-manager     1                   acc1f14dd813e       kube-controller-manager-no-preload-730717
	2dfa3eec550c6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   18 minutes ago      Running             kube-scheduler              1                   92c6e99773f89       kube-scheduler-no-preload-730717
	9a7e8ebe2c7f8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   a087c6efc4edc       etcd-no-preload-730717
	
	
	==> coredns [b42daf67456ad57382aaa4b3197eceb499c4f8125ab0d76af7df60ce5d3ca961] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52547 - 5251 "HINFO IN 3276868380242433564.5868470022607830145. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.474034407s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
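The coredns log above shows its kubernetes plugin waiting for the API while the client-go reflectors time out against the in-cluster Service VIP (10.96.0.1:443), presumably because service routing on the freshly restarted node was not yet reprogrammed. A minimal sketch of that reachability check follows; it would need to run from inside a pod's network namespace, and the file name and three-second timeout are illustrative assumptions.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		// Matches the "dial tcp 10.96.0.1:443: i/o timeout" errors in the log above.
		fmt.Println("kubernetes Service VIP unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("kubernetes Service VIP reachable from this network namespace")
}

Until this dial succeeds, coredns keeps logging `plugin/ready: Still waiting on: "kubernetes"`, exactly as recorded above.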
	
	
	==> describe nodes <==
	Name:               no-preload-730717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-730717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=no-preload-730717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T09_36_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 09:36:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-730717
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:56:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 09:51:42 +0000   Mon, 29 Sep 2025 09:36:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 09:51:42 +0000   Mon, 29 Sep 2025 09:36:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 09:51:42 +0000   Mon, 29 Sep 2025 09:36:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 09:51:42 +0000   Mon, 29 Sep 2025 09:36:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-730717
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 5521e6bf6c6b43289a49004b78ac9a1f
	  System UUID:                cf880771-51a4-4a5c-81a6-14d707678d39
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-ncwp4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-no-preload-730717                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-97tnr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-no-preload-730717              250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-730717     200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-4bmgw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-730717              100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-42r64               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vrtpm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d8kf7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-730717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-730717 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node no-preload-730717 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     19m                kubelet          Node no-preload-730717 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node no-preload-730717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node no-preload-730717 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-730717 event: Registered Node no-preload-730717 in Controller
	  Normal  NodeReady                19m                kubelet          Node no-preload-730717 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-730717 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-730717 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node no-preload-730717 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-730717 event: Registered Node no-preload-730717 in Controller
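As a quick consistency check on the tables above: the per-pod CPU requests sum to 100m + 100m + 100m + 250m + 200m + 100m + 100m = 950m, about 950m / 8000m ≈ 11.9% of the 8-CPU node, matching the 950m (11%) in Allocated resources; memory requests (70Mi + 100Mi + 50Mi + 200Mi = 420Mi) and limits (100m CPU from kindnet; 170Mi + 50Mi = 220Mi memory) match as well.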
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +24.116183] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[ +13.219794] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 36 70 5c 70 56 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[Sep29 09:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 d0 49 6d e5 00 08 06
	[  +0.000572] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +31.077955] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[  +7.090917] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ee a6 ac d9 7a 08 06
	[  +0.048507] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	[Sep29 09:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 9c 10 70 fc bc 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[ +35.403219] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 f0 eb 9a e4 7a 08 06
	[  +0.000378] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	
	
	==> etcd [9a7e8ebe2c7f8235a975702327b3918be43c56992c94e1e2d62e3a60dacdf738] <==
	{"level":"warn","ts":"2025-09-29T09:37:26.002953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.010006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.016483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.023363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.030942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.046768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.055141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.062307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.068683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.074983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.082001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.089063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.096641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.104587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.111363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.126053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.133101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.140139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:37:26.183884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41638","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T09:47:25.658594Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1021}
	{"level":"info","ts":"2025-09-29T09:47:25.676814Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1021,"took":"17.896107ms","hash":1696462530,"current-db-size-bytes":3182592,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1302528,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-29T09:47:25.676891Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1696462530,"revision":1021,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T09:52:25.664222Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1300}
	{"level":"info","ts":"2025-09-29T09:52:25.667333Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1300,"took":"2.807219ms","hash":3974303621,"current-db-size-bytes":3182592,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1863680,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-29T09:52:25.667377Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3974303621,"revision":1300,"compact-revision":1021}
	
	
	==> kernel <==
	 09:56:15 up  3:38,  0 users,  load average: 0.51, 0.49, 0.97
	Linux no-preload-730717 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [eed72438814186b709516f48ea9db82d0175fe6211916cae17d158915dc933a9] <==
	I0929 09:54:07.967950       1 main.go:301] handling current node
	I0929 09:54:17.968912       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:54:17.968943       1 main.go:301] handling current node
	I0929 09:54:27.975931       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:54:27.975963       1 main.go:301] handling current node
	I0929 09:54:37.975238       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:54:37.975271       1 main.go:301] handling current node
	I0929 09:54:47.969628       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:54:47.969682       1 main.go:301] handling current node
	I0929 09:54:57.967122       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:54:57.967155       1 main.go:301] handling current node
	I0929 09:55:07.968013       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:55:07.968061       1 main.go:301] handling current node
	I0929 09:55:17.970468       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:55:17.970505       1 main.go:301] handling current node
	I0929 09:55:27.975579       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:55:27.975626       1 main.go:301] handling current node
	I0929 09:55:37.974814       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:55:37.974864       1 main.go:301] handling current node
	I0929 09:55:47.969791       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:55:47.969823       1 main.go:301] handling current node
	I0929 09:55:57.967595       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:55:57.967630       1 main.go:301] handling current node
	I0929 09:56:07.969296       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0929 09:56:07.969329       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d1678bd6daaee7593cf308b3b04fde00b41f17a7641d4cfd2833778f925bfc1] <==
	E0929 09:52:27.662589       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:52:27.662645       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 09:52:27.662665       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:52:27.663800       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:53:27.663116       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:53:27.663179       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:53:27.663192       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:53:27.664262       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:53:27.664309       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:53:27.664319       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:55:27.663650       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:55:27.663720       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:55:27.663734       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:55:27.664795       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:55:27.664875       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:55:27.664887       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8c5200e560089994a461092d833964ba4100be86716c520527a31816beee515c] <==
	I0929 09:50:00.253460       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:50:30.173768       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:50:30.260262       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:51:00.178264       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:51:00.267071       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:51:30.182555       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:51:30.274019       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:52:00.186329       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:52:00.280309       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:52:30.191224       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:52:30.287189       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:53:00.195356       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:53:00.294324       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:53:30.199967       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:53:30.301441       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:54:00.204516       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:54:00.308812       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:54:30.208958       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:54:30.315367       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:55:00.212538       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:55:00.322309       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:55:30.217142       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:55:30.330462       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:56:00.221380       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:56:00.337149       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [b322c8a93a311d0675ba0aa0a333a4ca0b835a54321e9b9203627668790dd927] <==
	I0929 09:37:27.609505       1 server_linux.go:53] "Using iptables proxy"
	I0929 09:37:27.667898       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 09:37:27.768685       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 09:37:27.768724       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 09:37:27.768881       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 09:37:27.788888       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 09:37:27.788954       1 server_linux.go:132] "Using iptables Proxier"
	I0929 09:37:27.793942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 09:37:27.794420       1 server.go:527] "Version info" version="v1.34.1"
	I0929 09:37:27.794463       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:37:27.795689       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 09:37:27.795703       1 config.go:200] "Starting service config controller"
	I0929 09:37:27.795718       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 09:37:27.795718       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 09:37:27.795802       1 config.go:309] "Starting node config controller"
	I0929 09:37:27.795822       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 09:37:27.795846       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 09:37:27.795864       1 config.go:106] "Starting endpoint slice config controller"
	I0929 09:37:27.795902       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 09:37:27.895906       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 09:37:27.897093       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 09:37:27.897145       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2dfa3eec550c6076517250ed12f57707e7490bb65f701309138b1198d6e23007] <==
	I0929 09:37:25.635230       1 serving.go:386] Generated self-signed cert in-memory
	W0929 09:37:26.593585       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 09:37:26.593618       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 09:37:26.593631       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 09:37:26.593640       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 09:37:26.649547       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I0929 09:37:26.649653       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:37:26.652483       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:37:26.652575       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:37:26.653668       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 09:37:26.653799       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 09:37:26.752744       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 09:55:32 no-preload-730717 kubelet[699]: E0929 09:55:32.093592     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-42r64" podUID="345a8584-75b1-484c-b650-af1b45a8db0d"
	Sep 29 09:55:33 no-preload-730717 kubelet[699]: I0929 09:55:33.092172     699 scope.go:117] "RemoveContainer" containerID="c3192cb52ef9aa645ece55ec3028d988e96eead51c78285e9cc21db84bc4b878"
	Sep 29 09:55:33 no-preload-730717 kubelet[699]: E0929 09:55:33.092397     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vrtpm_kubernetes-dashboard(8a18522c-15ef-49f7-a1ee-a1867b6fd113)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vrtpm" podUID="8a18522c-15ef-49f7-a1ee-a1867b6fd113"
	Sep 29 09:55:34 no-preload-730717 kubelet[699]: E0929 09:55:34.236308     699 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139734235645580  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:55:34 no-preload-730717 kubelet[699]: E0929 09:55:34.236347     699 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139734235645580  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:55:37 no-preload-730717 kubelet[699]: E0929 09:55:37.093250     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d8kf7" podUID="5cf5352a-bd50-49be-812d-0483e26398c0"
	Sep 29 09:55:44 no-preload-730717 kubelet[699]: E0929 09:55:44.237649     699 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139744237398486  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:55:44 no-preload-730717 kubelet[699]: E0929 09:55:44.237688     699 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139744237398486  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:55:46 no-preload-730717 kubelet[699]: I0929 09:55:46.092161     699 scope.go:117] "RemoveContainer" containerID="c3192cb52ef9aa645ece55ec3028d988e96eead51c78285e9cc21db84bc4b878"
	Sep 29 09:55:46 no-preload-730717 kubelet[699]: E0929 09:55:46.092381     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vrtpm_kubernetes-dashboard(8a18522c-15ef-49f7-a1ee-a1867b6fd113)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vrtpm" podUID="8a18522c-15ef-49f7-a1ee-a1867b6fd113"
	Sep 29 09:55:46 no-preload-730717 kubelet[699]: E0929 09:55:46.093443     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-42r64" podUID="345a8584-75b1-484c-b650-af1b45a8db0d"
	Sep 29 09:55:49 no-preload-730717 kubelet[699]: E0929 09:55:49.093551     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d8kf7" podUID="5cf5352a-bd50-49be-812d-0483e26398c0"
	Sep 29 09:55:54 no-preload-730717 kubelet[699]: E0929 09:55:54.239363     699 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139754239129068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:55:54 no-preload-730717 kubelet[699]: E0929 09:55:54.239400     699 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139754239129068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:55:58 no-preload-730717 kubelet[699]: I0929 09:55:58.092282     699 scope.go:117] "RemoveContainer" containerID="c3192cb52ef9aa645ece55ec3028d988e96eead51c78285e9cc21db84bc4b878"
	Sep 29 09:55:58 no-preload-730717 kubelet[699]: E0929 09:55:58.092484     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vrtpm_kubernetes-dashboard(8a18522c-15ef-49f7-a1ee-a1867b6fd113)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vrtpm" podUID="8a18522c-15ef-49f7-a1ee-a1867b6fd113"
	Sep 29 09:55:59 no-preload-730717 kubelet[699]: E0929 09:55:59.093320     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-42r64" podUID="345a8584-75b1-484c-b650-af1b45a8db0d"
	Sep 29 09:56:04 no-preload-730717 kubelet[699]: E0929 09:56:04.094388     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d8kf7" podUID="5cf5352a-bd50-49be-812d-0483e26398c0"
	Sep 29 09:56:04 no-preload-730717 kubelet[699]: E0929 09:56:04.241361     699 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139764241099463  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:56:04 no-preload-730717 kubelet[699]: E0929 09:56:04.241393     699 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139764241099463  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:56:11 no-preload-730717 kubelet[699]: I0929 09:56:11.092244     699 scope.go:117] "RemoveContainer" containerID="c3192cb52ef9aa645ece55ec3028d988e96eead51c78285e9cc21db84bc4b878"
	Sep 29 09:56:11 no-preload-730717 kubelet[699]: E0929 09:56:11.092424     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vrtpm_kubernetes-dashboard(8a18522c-15ef-49f7-a1ee-a1867b6fd113)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vrtpm" podUID="8a18522c-15ef-49f7-a1ee-a1867b6fd113"
	Sep 29 09:56:12 no-preload-730717 kubelet[699]: E0929 09:56:12.093459     699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-42r64" podUID="345a8584-75b1-484c-b650-af1b45a8db0d"
	Sep 29 09:56:14 no-preload-730717 kubelet[699]: E0929 09:56:14.242449     699 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139774242198509  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 29 09:56:14 no-preload-730717 kubelet[699]: E0929 09:56:14.242483     699 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139774242198509  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	
	
	==> storage-provisioner [9ac0db1de5c9e7283faca5cac820b11ebe6eadf9130f1232f27003dd62509583] <==
	I0929 09:37:27.555975       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 09:37:57.561267       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bdf81a55b041d54c3b595f42b184b64b1725bce0a2b90db23eb7fd721aa16cab] <==
	W0929 09:55:49.716735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:51.719682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:51.723440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:53.726749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:53.730630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:55.734185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:55.741734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:57.744799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:57.748673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:59.751909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:55:59.756826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:01.760146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:01.764385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:03.767503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:03.772305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:05.776024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:05.779605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:07.782783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:07.786704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:09.789766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:09.793653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:11.796490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:11.801238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:13.805003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:13.809195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-730717 -n no-preload-730717
E0929 09:56:15.700377  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-730717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-42r64 kubernetes-dashboard-855c9754f9-d8kf7
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-730717 describe pod metrics-server-746fcd58dc-42r64 kubernetes-dashboard-855c9754f9-d8kf7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-730717 describe pod metrics-server-746fcd58dc-42r64 kubernetes-dashboard-855c9754f9-d8kf7: exit status 1 (58.614942ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-42r64" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-d8kf7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-730717 describe pod metrics-server-746fcd58dc-42r64 kubernetes-dashboard-855c9754f9-d8kf7: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qghq7" [d0d73ee5-b7eb-4f95-a577-03315e1c1e0a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 09:48:17.093158  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:48:50.261609  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:49:01.698273  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:49:47.443006  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:50:19.892663  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:50:29.302297  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/bridge-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:51:15.700289  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:52:12.829370  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:52:18.485966  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:53:17.093461  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/calico-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:53:35.897063  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:53:41.552376  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:53:50.261229  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:54:01.698701  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 09:56:57.536582739 +0000 UTC m=+5265.183208192
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 describe po kubernetes-dashboard-855c9754f9-qghq7 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-547715 describe po kubernetes-dashboard-855c9754f9-qghq7 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-qghq7
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-547715/192.168.85.2
Start Time:       Mon, 29 Sep 2025 09:38:16 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zr2k8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-zr2k8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason       Age                   From               Message
----     ------       ----                  ----               -------
Normal   Scheduled    18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qghq7 to default-k8s-diff-port-547715
Warning  FailedMount  18m                   kubelet            MountVolume.SetUp failed for volume "kube-api-access-zr2k8" : configmap "kube-root-ca.crt" not found
Warning  Failed       16m (x3 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling      13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed       13m (x5 over 18m)     kubelet            Error: ErrImagePull
Warning  Failed       13m (x2 over 15m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff      3m37s (x47 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed       3m37s (x47 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 logs kubernetes-dashboard-855c9754f9-qghq7 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-547715 logs kubernetes-dashboard-855c9754f9-qghq7 -n kubernetes-dashboard: exit status 1 (68.399199ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-qghq7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-547715 logs kubernetes-dashboard-855c9754f9-qghq7 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-547715
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-547715:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39",
	        "Created": "2025-09-29T09:37:00.383172067Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 744659,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T09:38:02.888389154Z",
	            "FinishedAt": "2025-09-29T09:38:01.958756731Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39/hostname",
	        "HostsPath": "/var/lib/docker/containers/0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39/hosts",
	        "LogPath": "/var/lib/docker/containers/0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39/0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39-json.log",
	        "Name": "/default-k8s-diff-port-547715",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-547715:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-547715",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0eca4c191c9460306a781078c6ada21fc372c0d9fd8b75581bd147083efbbb39",
	                "LowerDir": "/var/lib/docker/overlay2/7ee8063a0cee7dec7a9803ec54e49363559b4475815b4f3f0484f2f68765651b-init/diff:/var/lib/docker/overlay2/2b48de096b4f75995101626a7fbb9d151d1969fbf7a5100d1677e090e2af17f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7ee8063a0cee7dec7a9803ec54e49363559b4475815b4f3f0484f2f68765651b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7ee8063a0cee7dec7a9803ec54e49363559b4475815b4f3f0484f2f68765651b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7ee8063a0cee7dec7a9803ec54e49363559b4475815b4f3f0484f2f68765651b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-547715",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-547715/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-547715",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-547715",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-547715",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "430c0c64e53b0585256ff4ae33923900e2b772a28a10909c57aa7cf6d4fa82c7",
	            "SandboxKey": "/var/run/docker/netns/430c0c64e53b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-547715": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:d7:da:af:f9:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3c300966847ea321243189f9b85a3983ffa1be9c8e7a6f7878f542b39ea8eee5",
	                    "EndpointID": "38c98ff51ecb0df328c367ed9f76471369322141671140e922de0e3e1bce97d9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-547715",
	                        "0eca4c191c94"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-547715 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-547715 logs -n 25: (1.189239828s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p newest-cni-879079                                                                                                                                                     │ newest-cni-879079            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:36 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-730717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:36 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p no-preload-730717 --alsologtostderr -v=3                                                                                                                              │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ addons  │ enable dashboard -p no-preload-730717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ start   │ -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-547715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:37 UTC │
	│ stop    │ -p default-k8s-diff-port-547715 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:37 UTC │ 29 Sep 25 09:38 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-547715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ start   │ -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-547715 │ jenkins │ v1.37.0 │ 29 Sep 25 09:38 UTC │ 29 Sep 25 09:38 UTC │
	│ image   │ old-k8s-version-383226 image list --format=json                                                                                                                          │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ pause   │ -p old-k8s-version-383226 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ unpause │ -p old-k8s-version-383226 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-383226                                                                                                                                                │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-383226                                                                                                                                                │ old-k8s-version-383226       │ jenkins │ v1.37.0 │ 29 Sep 25 09:54 UTC │ 29 Sep 25 09:54 UTC │
	│ image   │ embed-certs-463478 image list --format=json                                                                                                                              │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:55 UTC │ 29 Sep 25 09:55 UTC │
	│ pause   │ -p embed-certs-463478 --alsologtostderr -v=1                                                                                                                             │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:55 UTC │ 29 Sep 25 09:55 UTC │
	│ unpause │ -p embed-certs-463478 --alsologtostderr -v=1                                                                                                                             │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:55 UTC │ 29 Sep 25 09:55 UTC │
	│ delete  │ -p embed-certs-463478                                                                                                                                                    │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:55 UTC │ 29 Sep 25 09:55 UTC │
	│ delete  │ -p embed-certs-463478                                                                                                                                                    │ embed-certs-463478           │ jenkins │ v1.37.0 │ 29 Sep 25 09:55 UTC │ 29 Sep 25 09:55 UTC │
	│ image   │ no-preload-730717 image list --format=json                                                                                                                               │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:56 UTC │ 29 Sep 25 09:56 UTC │
	│ pause   │ -p no-preload-730717 --alsologtostderr -v=1                                                                                                                              │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:56 UTC │ 29 Sep 25 09:56 UTC │
	│ unpause │ -p no-preload-730717 --alsologtostderr -v=1                                                                                                                              │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:56 UTC │ 29 Sep 25 09:56 UTC │
	│ delete  │ -p no-preload-730717                                                                                                                                                     │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:56 UTC │ 29 Sep 25 09:56 UTC │
	│ delete  │ -p no-preload-730717                                                                                                                                                     │ no-preload-730717            │ jenkins │ v1.37.0 │ 29 Sep 25 09:56 UTC │ 29 Sep 25 09:56 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 09:38:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 09:38:02.602451  744475 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:38:02.604572  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604588  744475 out.go:374] Setting ErrFile to fd 2...
	I0929 09:38:02.604596  744475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:38:02.604882  744475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:38:02.605487  744475 out.go:368] Setting JSON to false
	I0929 09:38:02.606828  744475 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12032,"bootTime":1759126651,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 09:38:02.606958  744475 start.go:140] virtualization: kvm guest
	I0929 09:38:02.608781  744475 out.go:179] * [default-k8s-diff-port-547715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 09:38:02.610638  744475 notify.go:220] Checking for updates...
	I0929 09:38:02.610689  744475 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 09:38:02.611947  744475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 09:38:02.613292  744475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:02.614515  744475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 09:38:02.615846  744475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 09:38:02.617298  744475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 09:38:02.619049  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:02.619871  744475 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 09:38:02.651910  744475 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 09:38:02.652021  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.724566  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.711673677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.724736  744475 docker.go:318] overlay module found
	I0929 09:38:02.726847  744475 out.go:179] * Using the docker driver based on existing profile
	I0929 09:38:02.727965  744475 start.go:304] selected driver: docker
	I0929 09:38:02.727982  744475 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.728131  744475 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 09:38:02.728938  744475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:38:02.798201  744475 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:38:02.786507737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:38:02.798574  744475 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:02.798625  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:02.798695  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:02.798744  744475 start.go:348] cluster config:
	{Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:02.803960  744475 out.go:179] * Starting "default-k8s-diff-port-547715" primary control-plane node in "default-k8s-diff-port-547715" cluster
	I0929 09:38:02.805367  744475 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 09:38:02.806633  744475 out.go:179] * Pulling base image v0.0.48 ...
	I0929 09:38:02.807764  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:02.807815  744475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I0929 09:38:02.807849  744475 cache.go:58] Caching tarball of preloaded images
	I0929 09:38:02.807847  744475 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 09:38:02.807982  744475 preload.go:172] Found /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 09:38:02.808000  744475 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I0929 09:38:02.808163  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:02.832169  744475 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 09:38:02.832193  744475 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 09:38:02.832223  744475 cache.go:232] Successfully downloaded all kic artifacts
	I0929 09:38:02.832255  744475 start.go:360] acquireMachinesLock for default-k8s-diff-port-547715: {Name:mkef8140f377b4de895c8571ff44e24be4754e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 09:38:02.832319  744475 start.go:364] duration metric: took 42.901µs to acquireMachinesLock for "default-k8s-diff-port-547715"
	I0929 09:38:02.832343  744475 start.go:96] Skipping create...Using existing machine configuration
	I0929 09:38:02.832351  744475 fix.go:54] fixHost starting: 
	I0929 09:38:02.832639  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:02.854072  744475 fix.go:112] recreateIfNeeded on default-k8s-diff-port-547715: state=Stopped err=<nil>
	W0929 09:38:02.854102  744475 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 09:38:02.225099  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:04.724187  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	W0929 09:38:06.724381  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:02.857616  744475 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-547715" ...
	I0929 09:38:02.857727  744475 cli_runner.go:164] Run: docker start default-k8s-diff-port-547715
	I0929 09:38:03.156711  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:03.180888  744475 kic.go:430] container "default-k8s-diff-port-547715" state is running.
	I0929 09:38:03.181888  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:03.203574  744475 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/config.json ...
	I0929 09:38:03.203810  744475 machine.go:93] provisionDockerMachine start ...
	I0929 09:38:03.203918  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:03.225450  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:03.225788  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:03.225809  744475 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 09:38:03.226519  744475 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33470->127.0.0.1:33506: read: connection reset by peer
	I0929 09:38:06.363220  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.363248  744475 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-547715"
	I0929 09:38:06.363324  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.381317  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.381536  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.381550  744475 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-547715 && echo "default-k8s-diff-port-547715" | sudo tee /etc/hostname
	I0929 09:38:06.531735  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-547715
	
	I0929 09:38:06.531842  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.549948  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:06.550236  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:06.550256  744475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-547715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-547715/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-547715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 09:38:06.685613  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 09:38:06.685649  744475 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21650-382648/.minikube CaCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21650-382648/.minikube}
	I0929 09:38:06.685684  744475 ubuntu.go:190] setting up certificates
	I0929 09:38:06.685695  744475 provision.go:84] configureAuth start
	I0929 09:38:06.685750  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:06.704839  744475 provision.go:143] copyHostCerts
	I0929 09:38:06.704915  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem, removing ...
	I0929 09:38:06.704934  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem
	I0929 09:38:06.705006  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/key.pem (1679 bytes)
	I0929 09:38:06.705139  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem, removing ...
	I0929 09:38:06.705152  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem
	I0929 09:38:06.705182  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/ca.pem (1082 bytes)
	I0929 09:38:06.705261  744475 exec_runner.go:144] found /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem, removing ...
	I0929 09:38:06.705269  744475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem
	I0929 09:38:06.705295  744475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21650-382648/.minikube/cert.pem (1123 bytes)
	I0929 09:38:06.705471  744475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-547715 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-547715 localhost minikube]
	I0929 09:38:06.863319  744475 provision.go:177] copyRemoteCerts
	I0929 09:38:06.863393  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 09:38:06.863443  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:06.882627  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:06.979437  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 09:38:07.004710  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 09:38:07.029798  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 09:38:07.054802  744475 provision.go:87] duration metric: took 369.089658ms to configureAuth
	I0929 09:38:07.054846  744475 ubuntu.go:206] setting minikube options for container-runtime
	I0929 09:38:07.055025  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:07.055152  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.073937  744475 main.go:141] libmachine: Using SSH client type: native
	I0929 09:38:07.074181  744475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0929 09:38:07.074200  744475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 09:38:07.357669  744475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 09:38:07.357696  744475 machine.go:96] duration metric: took 4.15386954s to provisionDockerMachine
	I0929 09:38:07.357709  744475 start.go:293] postStartSetup for "default-k8s-diff-port-547715" (driver="docker")
	I0929 09:38:07.357723  744475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 09:38:07.357795  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 09:38:07.357864  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.376587  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.473948  744475 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 09:38:07.477599  744475 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 09:38:07.477638  744475 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 09:38:07.477651  744475 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 09:38:07.477659  744475 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 09:38:07.477675  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/addons for local assets ...
	I0929 09:38:07.477729  744475 filesync.go:126] Scanning /home/jenkins/minikube-integration/21650-382648/.minikube/files for local assets ...
	I0929 09:38:07.477798  744475 filesync.go:149] local asset: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem -> 3862252.pem in /etc/ssl/certs
	I0929 09:38:07.477941  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 09:38:07.487030  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:07.511935  744475 start.go:296] duration metric: took 154.207911ms for postStartSetup
	I0929 09:38:07.512029  744475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:38:07.512065  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.530146  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.622415  744475 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 09:38:07.627142  744475 fix.go:56] duration metric: took 4.794784277s for fixHost
	I0929 09:38:07.627172  744475 start.go:83] releasing machines lock for "default-k8s-diff-port-547715", held for 4.794838826s
	I0929 09:38:07.627231  744475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-547715
	I0929 09:38:07.645874  744475 ssh_runner.go:195] Run: cat /version.json
	I0929 09:38:07.645918  744475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 09:38:07.645945  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.645972  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:07.664991  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.665181  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:07.828453  744475 ssh_runner.go:195] Run: systemctl --version
	I0929 09:38:07.833549  744475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 09:38:07.976610  744475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 09:38:07.981640  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:07.991646  744475 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 09:38:07.991738  744475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 09:38:08.001522  744475 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 09:38:08.001550  744475 start.go:495] detecting cgroup driver to use...
	I0929 09:38:08.001586  744475 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 09:38:08.001645  744475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 09:38:08.014507  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 09:38:08.026523  744475 docker.go:218] disabling cri-docker service (if available) ...
	I0929 09:38:08.026594  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 09:38:08.040674  744475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 09:38:08.052914  744475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 09:38:08.121663  744475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 09:38:08.190873  744475 docker.go:234] disabling docker service ...
	I0929 09:38:08.190996  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 09:38:08.203929  744475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 09:38:08.215853  744475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 09:38:08.282230  744475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 09:38:08.347410  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 09:38:08.359320  744475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 09:38:08.376309  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:08.524854  744475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 09:38:08.524933  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.536486  744475 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 09:38:08.536545  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.547317  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.557769  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.568183  744475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 09:38:08.578182  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.588665  744475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.598857  744475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 09:38:08.609520  744475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 09:38:08.618464  744475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 09:38:08.627869  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:08.694951  744475 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 09:38:08.976752  744475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 09:38:08.976819  744475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 09:38:08.980869  744475 start.go:563] Will wait 60s for crictl version
	I0929 09:38:08.980932  744475 ssh_runner.go:195] Run: which crictl
	I0929 09:38:08.984701  744475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 09:38:09.019500  744475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 09:38:09.019620  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.055087  744475 ssh_runner.go:195] Run: crio --version
	I0929 09:38:09.091964  744475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.24.6 ...
	W0929 09:38:08.724626  739826 pod_ready.go:104] pod "coredns-66bc5c9577-ncwp4" is not "Ready", error: <nil>
	I0929 09:38:09.223924  739826 pod_ready.go:94] pod "coredns-66bc5c9577-ncwp4" is "Ready"
	I0929 09:38:09.224002  739826 pod_ready.go:86] duration metric: took 41.005435401s for pod "coredns-66bc5c9577-ncwp4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.226573  739826 pod_ready.go:83] waiting for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.230177  739826 pod_ready.go:94] pod "etcd-no-preload-730717" is "Ready"
	I0929 09:38:09.230196  739826 pod_ready.go:86] duration metric: took 3.600648ms for pod "etcd-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.232019  739826 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.235556  739826 pod_ready.go:94] pod "kube-apiserver-no-preload-730717" is "Ready"
	I0929 09:38:09.235574  739826 pod_ready.go:86] duration metric: took 3.535675ms for pod "kube-apiserver-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.237200  739826 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.422451  739826 pod_ready.go:94] pod "kube-controller-manager-no-preload-730717" is "Ready"
	I0929 09:38:09.422486  739826 pod_ready.go:86] duration metric: took 185.263743ms for pod "kube-controller-manager-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:09.623052  739826 pod_ready.go:83] waiting for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.022664  739826 pod_ready.go:94] pod "kube-proxy-4bmgw" is "Ready"
	I0929 09:38:10.022689  739826 pod_ready.go:86] duration metric: took 399.612543ms for pod "kube-proxy-4bmgw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.224443  739826 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622809  739826 pod_ready.go:94] pod "kube-scheduler-no-preload-730717" is "Ready"
	I0929 09:38:10.622852  739826 pod_ready.go:86] duration metric: took 398.374387ms for pod "kube-scheduler-no-preload-730717" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:10.622869  739826 pod_ready.go:40] duration metric: took 42.407933129s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:10.670550  739826 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:10.673808  739826 out.go:179] * Done! kubectl is now configured to use "no-preload-730717" cluster and "default" namespace by default
	I0929 09:38:09.093120  744475 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-547715 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 09:38:09.111264  744475 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 09:38:09.115466  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 09:38:09.127999  744475 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 09:38:09.128194  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.274999  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.416048  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.554074  744475 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I0929 09:38:09.554387  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.693270  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.833942  744475 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I0929 09:38:09.976460  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.021351  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.021374  744475 crio.go:433] Images already preloaded, skipping extraction
	I0929 09:38:10.021423  744475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 09:38:10.057863  744475 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 09:38:10.057891  744475 cache_images.go:85] Images are preloaded, skipping loading
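
The preload check above lists CRI-O's images with `sudo crictl images --output json` and compares them against the set expected for this Kubernetes version. A minimal sketch of that comparison, assuming crictl's JSON output exposes an `images` array with a `repoTags` field (field names taken from the CRI ListImages response and may differ across crictl versions); the `want` list below is a hypothetical subset, not minikube's actual manifest:

    // imagecheck.go - sketch of the "are all images preloaded?" check,
    // assuming `crictl images --output json` returns {"images":[{"repoTags":[...]}, ...]}.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type criImage struct {
        RepoTags []string `json:"repoTags"`
    }

    type criImageList struct {
        Images []criImage `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var list criImageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println("unexpected crictl output:", err)
            return
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Hypothetical subset of the images expected for v1.34.1 on crio.
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.34.1",
            "registry.k8s.io/kube-proxy:v1.34.1",
        }
        for _, w := range want {
            if !have[w] {
                fmt.Println("missing image, preload required:", w)
                return
            }
        }
        fmt.Println("all images are preloaded for cri-o runtime")
    }
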
	I0929 09:38:10.057901  744475 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I0929 09:38:10.058037  744475 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-547715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 09:38:10.058111  744475 ssh_runner.go:195] Run: crio config
	I0929 09:38:10.102165  744475 cni.go:84] Creating CNI manager for ""
	I0929 09:38:10.102193  744475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 09:38:10.102207  744475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 09:38:10.102236  744475 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-547715 NodeName:default-k8s-diff-port-547715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 09:38:10.102404  744475 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-547715"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 09:38:10.102481  744475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I0929 09:38:10.112188  744475 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 09:38:10.112255  744475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 09:38:10.121661  744475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0929 09:38:10.140487  744475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 09:38:10.160494  744475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
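
At this point the rendered v1beta4 kubeadm config shown above has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. If a config like this needs to be checked by hand, recent kubeadm releases can validate it without touching the cluster; a sketch invoking that from Go (the binary path and file name are taken from the log, and the `config validate` subcommand is assumed to exist in this kubeadm version):

    // validateconfig.go - sketch of validating the generated kubeadm config on the
    // node with `kubeadm config validate`, without starting or changing anything.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.34.1/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml.new"
        out, err := exec.Command("sudo", kubeadm, "config", "validate", "--config", cfg).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("config rejected:", err)
            return
        }
        fmt.Println("kubeadm config is valid")
    }
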
	I0929 09:38:10.179722  744475 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 09:38:10.183977  744475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
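
The bash one-liner above is how minikube pins control-plane.minikube.internal to the node IP: strip any existing entry from /etc/hosts, append the fresh mapping via a temp file, then copy it back. A stand-alone sketch of the same idempotent update (hostname and IP taken from the log; the result is printed instead of written back so the example is harmless to run):

    // hostsentry.go - sketch of the idempotent /etc/hosts update shown in the log:
    // drop any line ending in "<TAB>control-plane.minikube.internal", then append
    // the current mapping. The rewritten file is printed rather than installed.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const ip = "192.168.85.2"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println("read /etc/hosts:", err)
            return
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale entry for the alias, exactly like the grep -v
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        fmt.Println(strings.Join(kept, "\n"))
    }
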
	I0929 09:38:10.196126  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.262691  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.292254  744475 certs.go:68] Setting up /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715 for IP: 192.168.85.2
	I0929 09:38:10.292283  744475 certs.go:194] generating shared ca certs ...
	I0929 09:38:10.292301  744475 certs.go:226] acquiring lock for ca certs: {Name:mk8a4c381001df08f9d08f1ae1a1b7d9c5716fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.292443  744475 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key
	I0929 09:38:10.292483  744475 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key
	I0929 09:38:10.292493  744475 certs.go:256] generating profile certs ...
	I0929 09:38:10.292592  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/client.key
	I0929 09:38:10.292649  744475 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key.78d67a41
	I0929 09:38:10.292690  744475 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key
	I0929 09:38:10.292789  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem (1338 bytes)
	W0929 09:38:10.292816  744475 certs.go:480] ignoring /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225_empty.pem, impossibly tiny 0 bytes
	I0929 09:38:10.292825  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 09:38:10.292877  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/ca.pem (1082 bytes)
	I0929 09:38:10.292902  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/cert.pem (1123 bytes)
	I0929 09:38:10.292924  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/certs/key.pem (1679 bytes)
	I0929 09:38:10.292963  744475 certs.go:484] found cert: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem (1708 bytes)
	I0929 09:38:10.293652  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 09:38:10.320976  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 09:38:10.349012  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 09:38:10.381487  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 09:38:10.406553  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 09:38:10.432469  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 09:38:10.458734  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 09:38:10.483339  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/default-k8s-diff-port-547715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 09:38:10.508019  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/ssl/certs/3862252.pem --> /usr/share/ca-certificates/3862252.pem (1708 bytes)
	I0929 09:38:10.533382  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 09:38:10.558362  744475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21650-382648/.minikube/certs/386225.pem --> /usr/share/ca-certificates/386225.pem (1338 bytes)
	I0929 09:38:10.583377  744475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 09:38:10.602070  744475 ssh_runner.go:195] Run: openssl version
	I0929 09:38:10.607660  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3862252.pem && ln -fs /usr/share/ca-certificates/3862252.pem /etc/ssl/certs/3862252.pem"
	I0929 09:38:10.617911  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622307  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 08:48 /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.622354  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3862252.pem
	I0929 09:38:10.629918  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3862252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 09:38:10.640804  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 09:38:10.651151  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655258  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.655316  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 09:38:10.662603  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 09:38:10.672822  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/386225.pem && ln -fs /usr/share/ca-certificates/386225.pem /etc/ssl/certs/386225.pem"
	I0929 09:38:10.683319  744475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687277  744475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 08:48 /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.687348  744475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/386225.pem
	I0929 09:38:10.696079  744475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/386225.pem /etc/ssl/certs/51391683.0"
	I0929 09:38:10.707660  744475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 09:38:10.711977  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 09:38:10.719705  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 09:38:10.727227  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 09:38:10.734938  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 09:38:10.742331  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 09:38:10.750000  744475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
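
The six `openssl x509 -checkend 86400` runs above verify that each existing control-plane certificate stays valid for at least another 24 hours before it is reused. The same check can be done in pure Go as a sketch (the certificate path is one of the files from the log; only the standard library is used):

    // certcheck.go - sketch of the "-checkend 86400" test: does the certificate
    // remain valid for at least another 24 hours?
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block in", path)
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        deadline := time.Now().Add(24 * time.Hour) // 86400 seconds, as in the log
        if cert.NotAfter.Before(deadline) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
            return
        }
        fmt.Println("certificate valid past", deadline.Format(time.RFC3339))
    }
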
	I0929 09:38:10.758994  744475 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-547715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-547715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 09:38:10.759111  744475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 09:38:10.759156  744475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 09:38:10.801701  744475 cri.go:89] found id: ""
	I0929 09:38:10.801777  744475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 09:38:10.814003  744475 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 09:38:10.814030  744475 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 09:38:10.814082  744475 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 09:38:10.825280  744475 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:38:10.826421  744475 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-547715" does not appear in /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.827379  744475 kubeconfig.go:62] /home/jenkins/minikube-integration/21650-382648/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-547715" cluster setting kubeconfig missing "default-k8s-diff-port-547715" context setting]
	I0929 09:38:10.828702  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.830983  744475 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 09:38:10.843171  744475 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0929 09:38:10.843214  744475 kubeadm.go:593] duration metric: took 29.177344ms to restartPrimaryControlPlane
	I0929 09:38:10.843227  744475 kubeadm.go:394] duration metric: took 84.244515ms to StartCluster
	I0929 09:38:10.843248  744475 settings.go:142] acquiring lock: {Name:mk081a1135807bae44e38ca9ea22cde104c57502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.843363  744475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:38:10.845603  744475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/kubeconfig: {Name:mkd31289f2a83f9fd9558ce53615fcd149a450b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 09:38:10.846384  744475 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 09:38:10.846454  744475 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 09:38:10.846542  744475 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846565  744475 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846574  744475 addons.go:247] addon storage-provisioner should already be in state true
	I0929 09:38:10.846575  744475 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846596  744475 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846614  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846620  744475 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:10.846621  744475 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-547715"
	I0929 09:38:10.846618  744475 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-547715"
	I0929 09:38:10.846630  744475 config.go:182] Loaded profile config "default-k8s-diff-port-547715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:38:10.846642  744475 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.846656  744475 addons.go:247] addon dashboard should already be in state true
	W0929 09:38:10.846631  744475 addons.go:247] addon metrics-server should already be in state true
	I0929 09:38:10.846681  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846697  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.846974  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847135  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847150  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.847155  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.848072  744475 out.go:179] * Verifying Kubernetes components...
	I0929 09:38:10.849415  744475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 09:38:10.877953  744475 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 09:38:10.877980  744475 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 09:38:10.878525  744475 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-547715"
	W0929 09:38:10.878545  744475 addons.go:247] addon default-storageclass should already be in state true
	I0929 09:38:10.878575  744475 host.go:66] Checking if "default-k8s-diff-port-547715" exists ...
	I0929 09:38:10.879047  744475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-547715 --format={{.State.Status}}
	I0929 09:38:10.879403  744475 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 09:38:10.879439  744475 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:10.879448  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 09:38:10.879475  744475 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 09:38:10.879548  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.879454  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 09:38:10.879612  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.883150  744475 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 09:38:10.884341  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 09:38:10.884361  744475 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 09:38:10.884428  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.910318  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.910796  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.911948  744475 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:10.911964  744475 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 09:38:10.912016  744475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-547715
	I0929 09:38:10.914592  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.935385  744475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/default-k8s-diff-port-547715/id_rsa Username:docker}
	I0929 09:38:10.956363  744475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 09:38:10.989150  744475 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:11.038321  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:11.042162  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 09:38:11.042187  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 09:38:11.047218  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 09:38:11.047242  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 09:38:11.070239  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.072804  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 09:38:11.072828  744475 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 09:38:11.078863  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 09:38:11.078893  744475 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 09:38:11.104886  744475 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.104914  744475 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 09:38:11.110131  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 09:38:11.110158  744475 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 09:38:11.142191  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 09:38:11.142219  744475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 09:38:11.148094  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.148238  744475 retry.go:31] will retry after 359.205678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.151384  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 09:38:11.179885  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 09:38:11.179923  744475 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 09:38:11.182481  744475 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 09:38:11.182514  744475 retry.go:31] will retry after 316.417959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
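
Both apply failures above are the same race: kubectl's validation tries to fetch the OpenAPI schema from localhost:8444 before the restarted apiserver is listening, so addons.go retries after a short randomized delay. A minimal sketch of that retry-until-apply-succeeds loop, assuming a kubectl on PATH and using a simple doubling backoff rather than minikube's internal retry package:

    // applyretry.go - sketch of retrying `kubectl apply` while the apiserver
    // comes back up, roughly mirroring the retry.go behaviour in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
        backoff := 300 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
            if err == nil {
                fmt.Printf("applied %s on attempt %d\n", manifest, attempt)
                return
            }
            fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
            time.Sleep(backoff)
            backoff *= 2 // double the wait before the next try
        }
        fmt.Println("giving up on", manifest)
    }
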
	I0929 09:38:11.208649  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 09:38:11.208682  744475 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 09:38:11.232655  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 09:38:11.232724  744475 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 09:38:11.252807  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 09:38:11.252860  744475 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 09:38:11.272945  744475 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.272972  744475 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 09:38:11.292603  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 09:38:11.499678  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 09:38:11.508207  744475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 09:38:12.841081  744475 node_ready.go:49] node "default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:12.841123  744475 node_ready.go:38] duration metric: took 1.85187108s for node "default-k8s-diff-port-547715" to be "Ready" ...
	I0929 09:38:12.841142  744475 api_server.go:52] waiting for apiserver process to appear ...
	I0929 09:38:12.841200  744475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:38:13.424995  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.273447364s)
	I0929 09:38:13.425060  744475 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-547715"
	I0929 09:38:13.425163  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.132513063s)
	I0929 09:38:13.425661  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.925949942s)
	I0929 09:38:13.425900  744475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.917662767s)
	I0929 09:38:13.426006  744475 api_server.go:72] duration metric: took 2.57958819s to wait for apiserver process to appear ...
	I0929 09:38:13.426024  744475 api_server.go:88] waiting for apiserver healthz status ...
	I0929 09:38:13.426045  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.427072  744475 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-547715 addons enable metrics-server
	
	I0929 09:38:13.431499  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.431522  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:13.435572  744475 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0929 09:38:13.436883  744475 addons.go:514] duration metric: took 2.590443822s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0929 09:38:13.926913  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:13.932318  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 09:38:13.932348  744475 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 09:38:14.426994  744475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0929 09:38:14.431739  744475 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0929 09:38:14.432753  744475 api_server.go:141] control plane version: v1.34.1
	I0929 09:38:14.432785  744475 api_server.go:131] duration metric: took 1.006754243s to wait for apiserver health ...
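
The healthz probe above polls https://192.168.85.2:8444/healthz roughly every 500ms; a 500 with `[-]poststarthook/... failed` only means some post-start hooks have not finished, and the wait ends once the endpoint returns 200 "ok". A self-contained sketch of that poll loop (the apiserver serves a certificate the host does not trust, so the sketch skips TLS verification, which is acceptable for a test-cluster liveness probe but not in general):

    // healthzwait.go - sketch of polling the apiserver /healthz endpoint until it
    // returns 200, as the log does between 09:38:13 and 09:38:14.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        url := "https://192.168.85.2:8444/healthz"
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test cluster only
            },
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            } else {
                fmt.Println("healthz not reachable yet:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }
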
	I0929 09:38:14.432798  744475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 09:38:14.435903  744475 system_pods.go:59] 9 kube-system pods found
	I0929 09:38:14.435952  744475 system_pods.go:61] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.435967  744475 system_pods.go:61] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.435982  744475 system_pods.go:61] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.435998  744475 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.436014  744475 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.436023  744475 system_pods.go:61] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.436033  744475 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.436045  744475 system_pods.go:61] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.436053  744475 system_pods.go:61] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.436063  744475 system_pods.go:74] duration metric: took 3.257318ms to wait for pod list to return data ...
	I0929 09:38:14.436077  744475 default_sa.go:34] waiting for default service account to be created ...
	I0929 09:38:14.438271  744475 default_sa.go:45] found service account: "default"
	I0929 09:38:14.438293  744475 default_sa.go:55] duration metric: took 2.206178ms for default service account to be created ...
	I0929 09:38:14.438304  744475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 09:38:14.441520  744475 system_pods.go:86] 9 kube-system pods found
	I0929 09:38:14.441555  744475 system_pods.go:89] "coredns-66bc5c9577-szmnf" [5e29763c-c6ef-438a-9f93-50e23e7d7719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 09:38:14.441569  744475 system_pods.go:89] "etcd-default-k8s-diff-port-547715" [747d98ee-01d7-435b-b534-68726acc9b6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 09:38:14.441583  744475 system_pods.go:89] "kindnet-z4khf" [21e1056d-6b8b-4f52-87a4-0697d33a8118] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0929 09:38:14.441591  744475 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-547715" [a774ed96-0fbe-4e3e-9337-da0ec0f7218c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 09:38:14.441606  744475 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-547715" [ab0faaa2-c66f-4970-95f5-e9c70617da5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 09:38:14.441613  744475 system_pods.go:89] "kube-proxy-tklgn" [8baf19ff-14de-4fa2-a98f-5430a05e4d14] Running
	I0929 09:38:14.441622  744475 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-547715" [63d3de84-296e-42b5-9a46-b062536ba5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 09:38:14.441633  744475 system_pods.go:89] "metrics-server-746fcd58dc-lh9zv" [4dd3d308-ff96-4085-9bc5-05d915186915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 09:38:14.441641  744475 system_pods.go:89] "storage-provisioner" [f920f3bf-4fcd-4ba8-80da-ce5fd48a56b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 09:38:14.441654  744475 system_pods.go:126] duration metric: took 3.342797ms to wait for k8s-apps to be running ...
	I0929 09:38:14.441667  744475 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 09:38:14.441718  744475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:38:14.457198  744475 system_svc.go:56] duration metric: took 15.510885ms WaitForService to wait for kubelet
	I0929 09:38:14.457234  744475 kubeadm.go:578] duration metric: took 3.610818298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 09:38:14.457257  744475 node_conditions.go:102] verifying NodePressure condition ...
	I0929 09:38:14.460508  744475 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 09:38:14.460534  744475 node_conditions.go:123] node cpu capacity is 8
	I0929 09:38:14.460550  744475 node_conditions.go:105] duration metric: took 3.287088ms to run NodePressure ...
	I0929 09:38:14.460566  744475 start.go:241] waiting for startup goroutines ...
	I0929 09:38:14.460575  744475 start.go:246] waiting for cluster config update ...
	I0929 09:38:14.460591  744475 start.go:255] writing updated cluster config ...
	I0929 09:38:14.461011  744475 ssh_runner.go:195] Run: rm -f paused
	I0929 09:38:14.465262  744475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 09:38:14.469249  744475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 09:38:16.474616  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:18.974817  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:21.474679  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:23.974653  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:25.974904  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:27.975234  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:30.474414  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:32.475244  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:34.975746  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:37.474689  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:39.974324  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:42.474794  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:44.476364  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:46.974499  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:49.474657  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	W0929 09:38:51.474940  744475 pod_ready.go:104] pod "coredns-66bc5c9577-szmnf" is not "Ready", error: <nil>
	I0929 09:38:52.974403  744475 pod_ready.go:94] pod "coredns-66bc5c9577-szmnf" is "Ready"
	I0929 09:38:52.974429  744475 pod_ready.go:86] duration metric: took 38.50515659s for pod "coredns-66bc5c9577-szmnf" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.977032  744475 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.980878  744475 pod_ready.go:94] pod "etcd-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.980904  744475 pod_ready.go:86] duration metric: took 3.847603ms for pod "etcd-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.982681  744475 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.986175  744475 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:52.986196  744475 pod_ready.go:86] duration metric: took 3.493752ms for pod "kube-apiserver-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:52.988006  744475 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.172805  744475 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:53.172860  744475 pod_ready.go:86] duration metric: took 184.829323ms for pod "kube-controller-manager-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.372987  744475 pod_ready.go:83] waiting for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.772398  744475 pod_ready.go:94] pod "kube-proxy-tklgn" is "Ready"
	I0929 09:38:53.772428  744475 pod_ready.go:86] duration metric: took 399.413461ms for pod "kube-proxy-tklgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:53.972993  744475 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373344  744475 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-547715" is "Ready"
	I0929 09:38:54.373370  744475 pod_ready.go:86] duration metric: took 400.353446ms for pod "kube-scheduler-default-k8s-diff-port-547715" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 09:38:54.373382  744475 pod_ready.go:40] duration metric: took 39.908092821s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
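
The pod_ready wait above cycles through each control-plane label selector and blocks until the matching kube-system pod reports Ready (coredns accounts for most of the ~40s here, since its container had to come back up under the restarted apiserver). The same wait can be expressed with kubectl directly; a sketch, using the profile's context name from the log and a 4-minute ceiling matching the extra-wait budget:

    // podwait.go - sketch of waiting for the core kube-system pods to be Ready,
    // mirroring the label selectors listed in the log, via `kubectl wait`.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        selectors := []string{
            "k8s-app=kube-dns",
            "component=etcd",
            "component=kube-apiserver",
            "component=kube-controller-manager",
            "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-547715",
                "-n", "kube-system", "wait", "--for=condition=Ready",
                "pod", "-l", sel, "--timeout=4m")
            out, err := cmd.CombinedOutput()
            fmt.Printf("%s: %s", sel, out)
            if err != nil {
                fmt.Println("wait failed:", err)
                return
            }
        }
        fmt.Println("all selected kube-system pods are Ready")
    }
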
	I0929 09:38:54.420218  744475 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I0929 09:38:54.422092  744475 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-547715" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 09:55:30 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:55:30.382650888Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=747b637a-1972-4bea-b6a7-3b69c8ca1181 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:41 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:55:41.381299566Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=960fc8e1-1323-4302-aec2-4429735797c9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:41 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:55:41.381335821Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=51bc89a8-1c5e-4690-93ae-8ff0fab5dde6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:41 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:55:41.381502361Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=960fc8e1-1323-4302-aec2-4429735797c9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:41 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:55:41.381593077Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=51bc89a8-1c5e-4690-93ae-8ff0fab5dde6 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:55 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:55:55.381694437Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8b8f64ba-a988-4fef-905f-22257e890602 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:55 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:55:55.382094622Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8b8f64ba-a988-4fef-905f-22257e890602 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:56 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:55:56.381614651Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=7ad07e97-681d-49a0-967d-36fc042c7a6a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:55:56 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:55:56.381926476Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=7ad07e97-681d-49a0-967d-36fc042c7a6a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:07 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:07.381324896Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b9de5722-0b60-475c-9992-6758453b6987 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:07 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:07.381525985Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b9de5722-0b60-475c-9992-6758453b6987 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:10 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:10.382432307Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=098ff63f-09f4-4eb1-afd6-ae91da1a3553 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:10 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:10.382756193Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=098ff63f-09f4-4eb1-afd6-ae91da1a3553 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:20 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:20.382421581Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a1d132d2-be71-4822-bcfe-3fbf15e4e8fc name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:20 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:20.382705395Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a1d132d2-be71-4822-bcfe-3fbf15e4e8fc name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:24 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:24.381759460Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1db7a93c-4944-42d7-ac28-739cbd479b19 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:24 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:24.382109845Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1db7a93c-4944-42d7-ac28-739cbd479b19 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:33 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:33.382036291Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c61b7488-fe2b-4d94-9627-7b6cf8bddf5a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:33 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:33.382288358Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c61b7488-fe2b-4d94-9627-7b6cf8bddf5a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:38 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:38.381969401Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=607828e2-0e2e-47be-8316-f58d8fda2a58 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:38 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:38.382295983Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=607828e2-0e2e-47be-8316-f58d8fda2a58 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:46 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:46.381425047Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=298db012-8fc1-4b3b-979a-60b468c1ae37 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:46 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:46.381705384Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=298db012-8fc1-4b3b-979a-60b468c1ae37 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:49 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:49.381095131Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e3bad1cc-7aab-4aa9-9cbc-50a717f388b4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 09:56:49 default-k8s-diff-port-547715 crio[548]: time="2025-09-29 09:56:49.381360488Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e3bad1cc-7aab-4aa9-9cbc-50a717f388b4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	6ecbb463c1ee5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   578491b028ba5       dashboard-metrics-scraper-6ffb444bf9-dtdv9
	0766e166e039f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   a7999b6883608       storage-provisioner
	282e1eb9eb159       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   a7999b6883608       storage-provisioner
	70ab7f2e8b6b8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   a74cb12ec9d60       kindnet-z4khf
	a47379e268889       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   18 minutes ago      Running             kube-proxy                  1                   85c62924ae93b       kube-proxy-tklgn
	6c9e0e8b13ca0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   016129f11f4d9       busybox
	a6f58acf91e8c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   7e3fdbc819f2d       coredns-66bc5c9577-szmnf
	c9d70defb42b6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   18 minutes ago      Running             kube-controller-manager     1                   6ef35ee579036       kube-controller-manager-default-k8s-diff-port-547715
	8722901e90377       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   3088452426d15       etcd-default-k8s-diff-port-547715
	c22423ef78077       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   18 minutes ago      Running             kube-scheduler              1                   ff1dce97f103e       kube-scheduler-default-k8s-diff-port-547715
	08e72b4f4dd8f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   18 minutes ago      Running             kube-apiserver              1                   677868a092b75       kube-apiserver-default-k8s-diff-port-547715
	
	
	==> coredns [a6f58acf91e8c557df13d6f3b1c4d00d883fa9cb0aa3a69b6ade22bdc2b28a85] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40022 - 54057 "HINFO IN 2772210620304821818.450426464391418620. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.508985837s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-547715
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-547715
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a178e40af38e700f0d249a115e87529cf130fe78
	                    minikube.k8s.io/name=default-k8s-diff-port-547715
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T09_37_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 09:37:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-547715
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 09:56:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 09:54:09 +0000   Mon, 29 Sep 2025 09:37:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 09:54:09 +0000   Mon, 29 Sep 2025 09:37:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 09:54:09 +0000   Mon, 29 Sep 2025 09:37:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 09:54:09 +0000   Mon, 29 Sep 2025 09:37:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-547715
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 4af6616ddbe04b1cbf75fc7b220ec352
	  System UUID:                31732521-1976-40d5-9acb-3d42efd87ef5
	  Boot ID:                    f6798896-741e-40b5-b5fd-284943eb7fde
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-szmnf                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-default-k8s-diff-port-547715                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-z4khf                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-547715             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-547715    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-tklgn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-547715             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-lh9zv                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dtdv9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qghq7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-547715 event: Registered Node default-k8s-diff-port-547715 in Controller
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-547715 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-547715 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-547715 event: Registered Node default-k8s-diff-port-547715 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +24.116183] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[ +13.219794] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 36 70 5c 70 56 08 06
	[  +0.000365] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da e2 84 76 8f 1a 08 06
	[Sep29 09:34] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 d0 49 6d e5 00 08 06
	[  +0.000572] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d6 88 3f 66 bb 08 06
	[ +31.077955] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[  +7.090917] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ee a6 ac d9 7a 08 06
	[  +0.048507] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	[Sep29 09:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 9c 10 70 fc bc 08 06
	[  +0.000395] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 3c 0c e2 9f 43 08 06
	[ +35.403219] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b6 f0 eb 9a e4 7a 08 06
	[  +0.000378] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 ff 2a 07 3f fc 08 06
	
	
	==> etcd [8722901e903773ba6c1b9b5c28a8383e30f3def513e7ad9bee0cfe8009efc6b5] <==
	{"level":"warn","ts":"2025-09-29T09:38:12.275882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.282763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.291895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.298974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.305226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.312289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.319152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.325353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.332891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.339630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.347050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.354204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.360756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.368343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.374861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.388331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.395270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.402241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T09:38:12.452257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44756","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T09:48:11.941027Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1027}
	{"level":"info","ts":"2025-09-29T09:48:11.959891Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1027,"took":"18.53092ms","hash":2073101736,"current-db-size-bytes":3219456,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1314816,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-29T09:48:11.959942Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2073101736,"revision":1027,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T09:53:11.946077Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1305}
	{"level":"info","ts":"2025-09-29T09:53:11.949098Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1305,"took":"2.680692ms","hash":1119496689,"current-db-size-bytes":3219456,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1880064,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-29T09:53:11.949133Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1119496689,"revision":1305,"compact-revision":1027}
	
	
	==> kernel <==
	 09:56:58 up  3:39,  0 users,  load average: 0.43, 0.48, 0.94
	Linux default-k8s-diff-port-547715 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [70ab7f2e8b6b84aa37a04955ea2785244f756f8668fb64a5a78ea9bcd3e77081] <==
	I0929 09:54:54.460959       1 main.go:301] handling current node
	I0929 09:55:04.457307       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:55:04.457356       1 main.go:301] handling current node
	I0929 09:55:14.461905       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:55:14.461942       1 main.go:301] handling current node
	I0929 09:55:24.457894       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:55:24.457936       1 main.go:301] handling current node
	I0929 09:55:34.456910       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:55:34.456979       1 main.go:301] handling current node
	I0929 09:55:44.457407       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:55:44.457457       1 main.go:301] handling current node
	I0929 09:55:54.461912       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:55:54.461954       1 main.go:301] handling current node
	I0929 09:56:04.458911       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:56:04.458962       1 main.go:301] handling current node
	I0929 09:56:14.465910       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:56:14.465952       1 main.go:301] handling current node
	I0929 09:56:24.458982       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:56:24.459032       1 main.go:301] handling current node
	I0929 09:56:34.459101       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:56:34.459137       1 main.go:301] handling current node
	I0929 09:56:44.457403       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:56:44.457468       1 main.go:301] handling current node
	I0929 09:56:54.456758       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0929 09:56:54.456799       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08e72b4f4dd8fd8797c4e2563f468f51c972eede4a8dc3bdcba373efd8b0050e] <==
	E0929 09:53:13.868706       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:53:13.868724       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 09:53:13.868770       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:53:13.869864       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:54:13.869155       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:54:13.869220       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:54:13.869239       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:54:13.870185       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:54:13.870285       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:54:13.870298       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:56:13.869948       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:56:13.870047       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 09:56:13.870068       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 09:56:13.871110       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 09:56:13.871210       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 09:56:13.871225       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c9d70defb42b6e720cfd1a1950b64416aea93b0f960ed6cb8d3001ef3db070f0] <==
	I0929 09:50:46.459999       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:51:16.379798       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:51:16.466640       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:51:46.385051       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:51:46.473631       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:52:16.389773       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:52:16.480875       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:52:46.393691       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:52:46.487278       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:53:16.398007       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:53:16.493507       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:53:46.402293       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:53:46.501035       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:54:16.406581       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:54:16.508196       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:54:46.410966       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:54:46.515338       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:55:16.415054       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:55:16.522328       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:55:46.418984       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:55:46.529290       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:56:16.423474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:56:16.536614       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 09:56:46.427673       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 09:56:46.543353       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [a47379e268889af5f827113214e1ef4563e0a019658984b85108534407ffeebe] <==
	I0929 09:38:14.145734       1 server_linux.go:53] "Using iptables proxy"
	I0929 09:38:14.210575       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 09:38:14.311453       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 09:38:14.311494       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 09:38:14.311599       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 09:38:14.329358       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 09:38:14.329407       1 server_linux.go:132] "Using iptables Proxier"
	I0929 09:38:14.334585       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 09:38:14.335126       1 server.go:527] "Version info" version="v1.34.1"
	I0929 09:38:14.335160       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:38:14.336746       1 config.go:200] "Starting service config controller"
	I0929 09:38:14.336772       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 09:38:14.336803       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 09:38:14.336808       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 09:38:14.336996       1 config.go:106] "Starting endpoint slice config controller"
	I0929 09:38:14.337019       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 09:38:14.337051       1 config.go:309] "Starting node config controller"
	I0929 09:38:14.337066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 09:38:14.337073       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 09:38:14.437396       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 09:38:14.437394       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 09:38:14.437594       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c22423ef78077ac2bf7ffed8f5b51a4238c30f39b0767c047837122c5b00b85f] <==
	I0929 09:38:11.781432       1 serving.go:386] Generated self-signed cert in-memory
	W0929 09:38:12.844960       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 09:38:12.844999       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0929 09:38:12.845012       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 09:38:12.845021       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 09:38:12.877438       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I0929 09:38:12.877600       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 09:38:12.880984       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:38:12.881043       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 09:38:12.881280       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 09:38:12.881350       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 09:38:12.981267       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 09:56:10 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:10.383061     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qghq7" podUID="d0d73ee5-b7eb-4f95-a577-03315e1c1e0a"
	Sep 29 09:56:10 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:10.521152     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139770520909449  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:56:10 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:10.521183     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139770520909449  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:56:12 default-k8s-diff-port-547715 kubelet[696]: I0929 09:56:12.381197     696 scope.go:117] "RemoveContainer" containerID="6ecbb463c1ee5dcd084d8b251df0de38437a216691882cba7ef42143e89f93da"
	Sep 29 09:56:12 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:12.381464     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtdv9_kubernetes-dashboard(12be6e28-2b06-42d9-acaf-e21b41be2e10)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtdv9" podUID="12be6e28-2b06-42d9-acaf-e21b41be2e10"
	Sep 29 09:56:20 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:20.383060     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-lh9zv" podUID="4dd3d308-ff96-4085-9bc5-05d915186915"
	Sep 29 09:56:20 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:20.523190     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139780522954241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:56:20 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:20.523227     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139780522954241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:56:24 default-k8s-diff-port-547715 kubelet[696]: I0929 09:56:24.381068     696 scope.go:117] "RemoveContainer" containerID="6ecbb463c1ee5dcd084d8b251df0de38437a216691882cba7ef42143e89f93da"
	Sep 29 09:56:24 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:24.381306     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtdv9_kubernetes-dashboard(12be6e28-2b06-42d9-acaf-e21b41be2e10)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtdv9" podUID="12be6e28-2b06-42d9-acaf-e21b41be2e10"
	Sep 29 09:56:24 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:24.382438     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qghq7" podUID="d0d73ee5-b7eb-4f95-a577-03315e1c1e0a"
	Sep 29 09:56:30 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:30.524492     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139790524262480  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:56:30 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:30.524532     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139790524262480  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:56:33 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:33.382656     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-lh9zv" podUID="4dd3d308-ff96-4085-9bc5-05d915186915"
	Sep 29 09:56:38 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:38.382638     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qghq7" podUID="d0d73ee5-b7eb-4f95-a577-03315e1c1e0a"
	Sep 29 09:56:39 default-k8s-diff-port-547715 kubelet[696]: I0929 09:56:39.381509     696 scope.go:117] "RemoveContainer" containerID="6ecbb463c1ee5dcd084d8b251df0de38437a216691882cba7ef42143e89f93da"
	Sep 29 09:56:39 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:39.381719     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtdv9_kubernetes-dashboard(12be6e28-2b06-42d9-acaf-e21b41be2e10)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtdv9" podUID="12be6e28-2b06-42d9-acaf-e21b41be2e10"
	Sep 29 09:56:40 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:40.526082     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139800525860048  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:56:40 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:40.526122     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139800525860048  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:56:46 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:46.382044     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-lh9zv" podUID="4dd3d308-ff96-4085-9bc5-05d915186915"
	Sep 29 09:56:49 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:49.381715     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qghq7" podUID="d0d73ee5-b7eb-4f95-a577-03315e1c1e0a"
	Sep 29 09:56:50 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:50.528078     696 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759139810527798163  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:56:50 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:50.528115     696 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759139810527798163  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 29 09:56:54 default-k8s-diff-port-547715 kubelet[696]: I0929 09:56:54.381007     696 scope.go:117] "RemoveContainer" containerID="6ecbb463c1ee5dcd084d8b251df0de38437a216691882cba7ef42143e89f93da"
	Sep 29 09:56:54 default-k8s-diff-port-547715 kubelet[696]: E0929 09:56:54.381266     696 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtdv9_kubernetes-dashboard(12be6e28-2b06-42d9-acaf-e21b41be2e10)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtdv9" podUID="12be6e28-2b06-42d9-acaf-e21b41be2e10"
	
	
	==> storage-provisioner [0766e166e039f4881db0b03fbcd149d9896c1040d1a3696faf2a928ae77a406b] <==
	W0929 09:56:34.006241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:36.009465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:36.013669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:38.016636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:38.022369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:40.025872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:40.029704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:42.032289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:42.035951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:44.039329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:44.044265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:46.047001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:46.051716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:48.055455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:48.059569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:50.062490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:50.066504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:52.069147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:52.074110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:54.077542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:54.081140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:56.084125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:56.089045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:58.092705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 09:56:58.097074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [282e1eb9eb159f34d9a6fac10bac821f634ff7c567d7339497dbea1114cc2478] <==
	I0929 09:38:14.116633       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 09:38:44.120219       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
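
The fatal line from the second storage-provisioner container above records a failed GET to https://10.96.0.1:443/version, i.e. the provisioner could not reach the in-cluster API service VIP during its start-up version check before the 32s timeout. The following is a minimal client-go sketch of that same call, shown only for reference; it is an illustration under the assumption of in-cluster service-account credentials, not the provisioner's actual source.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config: reads the service-account token and CA mounted into the
	// pod and targets the API service VIP (10.96.0.1:443 with the default
	// 10.96.0.0/12 service CIDR used in this run).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("error building in-cluster config: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("error building clientset: %v", err)
	}
	// GET /version -- the request that timed out in the provisioner log above.
	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		log.Fatalf("error getting server version: %v", err)
	}
	fmt.Println("server version:", v.GitVersion)
}

Run inside a pod, rest.InClusterConfig() typically resolves the API server from the KUBERNETES_SERVICE_HOST/KUBERNETES_SERVICE_PORT environment variables, which point at the same 10.96.0.1:443 VIP seen in the timeout above.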
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-lh9zv kubernetes-dashboard-855c9754f9-qghq7
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 describe pod metrics-server-746fcd58dc-lh9zv kubernetes-dashboard-855c9754f9-qghq7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-547715 describe pod metrics-server-746fcd58dc-lh9zv kubernetes-dashboard-855c9754f9-qghq7: exit status 1 (57.913237ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-lh9zv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qghq7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-547715 describe pod metrics-server-746fcd58dc-lh9zv kubernetes-dashboard-855c9754f9-qghq7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.47s)

                                                
                                    

Test pass (280/331)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.48
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.94
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.2
21 TestBinaryMirror 0.81
22 TestOffline 86.16
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 409.12
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 29.49
36 TestAddons/parallel/RegistryCreds 0.62
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 6.66
42 TestAddons/parallel/Headlamp 32.44
43 TestAddons/parallel/CloudSpanner 5.5
45 TestAddons/parallel/NvidiaDevicePlugin 6.49
48 TestAddons/StoppedEnableDisable 18.51
49 TestCertOptions 34.09
50 TestCertExpiration 220.8
52 TestForceSystemdFlag 38.2
53 TestForceSystemdEnv 31.77
55 TestKVMDriverInstallOrUpdate 1.05
59 TestErrorSpam/setup 21.63
60 TestErrorSpam/start 0.62
61 TestErrorSpam/status 0.92
62 TestErrorSpam/pause 1.47
63 TestErrorSpam/unpause 1.53
64 TestErrorSpam/stop 8.05
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 42.94
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 7.46
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.03
76 TestFunctional/serial/CacheCmd/cache/add_local 1.36
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.78
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 47.06
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.45
87 TestFunctional/serial/LogsFileCmd 1.46
88 TestFunctional/serial/InvalidService 4.04
90 TestFunctional/parallel/ConfigCmd 0.39
92 TestFunctional/parallel/DryRun 0.35
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.92
99 TestFunctional/parallel/AddonsCmd 0.16
102 TestFunctional/parallel/SSHCmd 0.57
103 TestFunctional/parallel/CpCmd 1.77
105 TestFunctional/parallel/FileSync 0.32
106 TestFunctional/parallel/CertSync 1.8
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
114 TestFunctional/parallel/License 0.43
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
118 TestFunctional/parallel/Version/short 0.05
119 TestFunctional/parallel/Version/components 0.48
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
129 TestFunctional/parallel/ImageCommands/ImageBuild 2.07
130 TestFunctional/parallel/ImageCommands/Setup 0.94
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.22
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
138 TestFunctional/parallel/MountCmd/any-port 118.41
139 TestFunctional/parallel/MountCmd/specific-port 1.77
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.84
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
148 TestFunctional/parallel/ProfileCmd/profile_list 0.38
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
150 TestFunctional/parallel/ServiceCmd/List 1.69
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 151.99
163 TestMultiControlPlane/serial/DeployApp 4.59
164 TestMultiControlPlane/serial/PingHostFromPods 1.08
165 TestMultiControlPlane/serial/AddWorkerNode 24.44
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
168 TestMultiControlPlane/serial/CopyFile 16.53
169 TestMultiControlPlane/serial/StopSecondaryNode 18.78
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.2
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 114.62
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.27
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
176 TestMultiControlPlane/serial/StopCluster 42.28
177 TestMultiControlPlane/serial/RestartCluster 53.72
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 35.59
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
184 TestJSONOutput/start/Command 69.7
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.75
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.62
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 7.94
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.21
209 TestKicCustomNetwork/create_custom_network 29.07
210 TestKicCustomNetwork/use_default_bridge_network 25.28
211 TestKicExistingNetwork 25.1
212 TestKicCustomSubnet 25.13
213 TestKicStaticIP 24.7
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 48.84
218 TestMountStart/serial/StartWithMountFirst 5.4
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 5.1
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.67
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.19
225 TestMountStart/serial/RestartStopped 7.6
226 TestMountStart/serial/VerifyMountPostStop 0.26
229 TestMultiNode/serial/FreshStart2Nodes 96.01
230 TestMultiNode/serial/DeployApp2Nodes 3.58
231 TestMultiNode/serial/PingHostFrom2Pods 0.76
232 TestMultiNode/serial/AddNode 24.34
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.63
235 TestMultiNode/serial/CopyFile 9.36
236 TestMultiNode/serial/StopNode 2.25
237 TestMultiNode/serial/StartAfterStop 7.4
238 TestMultiNode/serial/RestartKeepsNodes 81.78
239 TestMultiNode/serial/DeleteNode 5.21
240 TestMultiNode/serial/StopMultiNode 28.55
241 TestMultiNode/serial/RestartMultiNode 48.15
242 TestMultiNode/serial/ValidateNameConflict 24.72
247 TestPreload 112.11
249 TestScheduledStopUnix 97.98
252 TestInsufficientStorage 9.68
253 TestRunningBinaryUpgrade 46.19
255 TestKubernetesUpgrade 302.72
256 TestMissingContainerUpgrade 74.63
258 TestStoppedBinaryUpgrade/Setup 0.55
261 TestStoppedBinaryUpgrade/Upgrade 59.99
266 TestNetworkPlugins/group/false 10.07
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
279 TestPause/serial/Start 44.67
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
282 TestNoKubernetes/serial/StartWithK8s 25.26
283 TestNoKubernetes/serial/StartWithStopK8s 17.12
284 TestPause/serial/SecondStartNoReconfiguration 7.23
285 TestPause/serial/Pause 0.66
286 TestPause/serial/VerifyStatus 0.31
287 TestPause/serial/Unpause 0.65
288 TestPause/serial/PauseAgain 0.65
289 TestPause/serial/DeletePaused 2.63
290 TestNoKubernetes/serial/Start 5.24
291 TestPause/serial/VerifyDeletedResources 3.73
292 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
293 TestNoKubernetes/serial/ProfileList 4.15
294 TestNoKubernetes/serial/Stop 1.63
295 TestNoKubernetes/serial/StartNoArgs 6.35
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
297 TestNetworkPlugins/group/auto/Start 75.99
298 TestNetworkPlugins/group/kindnet/Start 40.16
299 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
300 TestNetworkPlugins/group/auto/KubeletFlags 0.27
301 TestNetworkPlugins/group/auto/NetCatPod 9.24
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
303 TestNetworkPlugins/group/kindnet/NetCatPod 9.27
304 TestNetworkPlugins/group/calico/Start 53.31
305 TestNetworkPlugins/group/auto/DNS 0.13
306 TestNetworkPlugins/group/auto/Localhost 0.43
307 TestNetworkPlugins/group/auto/HairPin 0.31
308 TestNetworkPlugins/group/kindnet/DNS 0.15
309 TestNetworkPlugins/group/kindnet/Localhost 0.12
310 TestNetworkPlugins/group/kindnet/HairPin 0.11
311 TestNetworkPlugins/group/custom-flannel/Start 61.21
312 TestNetworkPlugins/group/enable-default-cni/Start 72.43
313 TestNetworkPlugins/group/calico/ControllerPod 6.01
314 TestNetworkPlugins/group/calico/KubeletFlags 0.28
315 TestNetworkPlugins/group/calico/NetCatPod 10.17
316 TestNetworkPlugins/group/calico/DNS 0.13
317 TestNetworkPlugins/group/calico/Localhost 0.11
318 TestNetworkPlugins/group/calico/HairPin 0.11
319 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
320 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.2
321 TestNetworkPlugins/group/flannel/Start 53.92
322 TestNetworkPlugins/group/custom-flannel/DNS 0.14
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
324 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.21
327 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
328 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
329 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
330 TestNetworkPlugins/group/bridge/Start 70.06
332 TestStartStop/group/old-k8s-version/serial/FirstStart 51.14
333 TestNetworkPlugins/group/flannel/ControllerPod 6.01
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
335 TestNetworkPlugins/group/flannel/NetCatPod 10.19
336 TestNetworkPlugins/group/flannel/DNS 0.14
337 TestNetworkPlugins/group/flannel/Localhost 0.11
338 TestNetworkPlugins/group/flannel/HairPin 0.12
339 TestStartStop/group/old-k8s-version/serial/DeployApp 7.29
341 TestStartStop/group/embed-certs/serial/FirstStart 45.77
342 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
343 TestNetworkPlugins/group/bridge/NetCatPod 10.2
344 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
345 TestStartStop/group/old-k8s-version/serial/Stop 16.11
346 TestNetworkPlugins/group/bridge/DNS 0.15
347 TestNetworkPlugins/group/bridge/Localhost 0.11
348 TestNetworkPlugins/group/bridge/HairPin 0.11
349 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
350 TestStartStop/group/old-k8s-version/serial/SecondStart 49.15
352 TestStartStop/group/no-preload/serial/FirstStart 57.89
354 TestStartStop/group/newest-cni/serial/FirstStart 30.75
355 TestStartStop/group/embed-certs/serial/DeployApp 8.26
356 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
357 TestStartStop/group/embed-certs/serial/Stop 18.25
358 TestStartStop/group/newest-cni/serial/DeployApp 0
359 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
360 TestStartStop/group/newest-cni/serial/Stop 2.56
361 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
362 TestStartStop/group/newest-cni/serial/SecondStart 12.64
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
364 TestStartStop/group/embed-certs/serial/SecondStart 48.56
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.85
369 TestStartStop/group/newest-cni/serial/Pause 2.63
370 TestStartStop/group/no-preload/serial/DeployApp 8.31
372 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.58
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.73
374 TestStartStop/group/no-preload/serial/Stop 16.3
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
376 TestStartStop/group/no-preload/serial/SecondStart 54.21
378 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
380 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.14
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
382 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.21
389 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
390 TestStartStop/group/old-k8s-version/serial/Pause 2.6
391 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.64
392 TestStartStop/group/embed-certs/serial/Pause 2.62
393 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.64
394 TestStartStop/group/no-preload/serial/Pause 2.58
395 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.71
396 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.6
TestDownloadOnly/v1.28.0/json-events (5.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-575596 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-575596 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.478797029s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.48s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 08:29:17.870853  386225 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0929 08:29:17.870974  386225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-575596
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-575596: exit status 85 (68.04387ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-575596 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-575596 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:29:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:29:12.435255  386237 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:29:12.435378  386237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:12.435390  386237 out.go:374] Setting ErrFile to fd 2...
	I0929 08:29:12.435396  386237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:12.435627  386237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	W0929 08:29:12.435784  386237 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21650-382648/.minikube/config/config.json: open /home/jenkins/minikube-integration/21650-382648/.minikube/config/config.json: no such file or directory
	I0929 08:29:12.436307  386237 out.go:368] Setting JSON to true
	I0929 08:29:12.437491  386237 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7901,"bootTime":1759126651,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:29:12.437615  386237 start.go:140] virtualization: kvm guest
	I0929 08:29:12.440018  386237 out.go:99] [download-only-575596] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0929 08:29:12.440173  386237 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 08:29:12.440224  386237 notify.go:220] Checking for updates...
	I0929 08:29:12.441434  386237 out.go:171] MINIKUBE_LOCATION=21650
	I0929 08:29:12.443123  386237 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:29:12.444633  386237 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:29:12.445973  386237 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:29:12.447059  386237 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 08:29:12.449208  386237 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 08:29:12.449487  386237 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:29:12.473102  386237 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:29:12.473189  386237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:12.527685  386237 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 08:29:12.518052499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:12.527816  386237 docker.go:318] overlay module found
	I0929 08:29:12.529682  386237 out.go:99] Using the docker driver based on user configuration
	I0929 08:29:12.529724  386237 start.go:304] selected driver: docker
	I0929 08:29:12.529734  386237 start.go:924] validating driver "docker" against <nil>
	I0929 08:29:12.529823  386237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:12.584404  386237 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 08:29:12.575248398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:12.584593  386237 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 08:29:12.585157  386237 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 08:29:12.585329  386237 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 08:29:12.587210  386237 out.go:171] Using Docker driver with root privileges
	I0929 08:29:12.588398  386237 cni.go:84] Creating CNI manager for ""
	I0929 08:29:12.588473  386237 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 08:29:12.588488  386237 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 08:29:12.588571  386237 start.go:348] cluster config:
	{Name:download-only-575596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-575596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:29:12.589976  386237 out.go:99] Starting "download-only-575596" primary control-plane node in "download-only-575596" cluster
	I0929 08:29:12.590009  386237 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 08:29:12.591273  386237 out.go:99] Pulling base image v0.0.48 ...
	I0929 08:29:12.591304  386237 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 08:29:12.591434  386237 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 08:29:12.609057  386237 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 08:29:12.609240  386237 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 08:29:12.609340  386237 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 08:29:12.618661  386237 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0929 08:29:12.618684  386237 cache.go:58] Caching tarball of preloaded images
	I0929 08:29:12.618855  386237 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 08:29:12.620795  386237 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 08:29:12.620813  386237 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 08:29:12.649227  386237 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0929 08:29:15.828817  386237 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 08:29:16.345879  386237 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 08:29:16.345975  386237 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 08:29:17.255812  386237 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0929 08:29:17.256198  386237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/download-only-575596/config.json ...
	I0929 08:29:17.256233  386237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/download-only-575596/config.json: {Name:mk2f7da50192ffa97f358c453694e9bced4c6b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 08:29:17.256402  386237 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 08:29:17.256578  386237 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21650-382648/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-575596 host does not exist
	  To start a cluster, run: "minikube start -p download-only-575596"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
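
In the "Last Start" log replayed above, minikube downloads the v1.28.0 preload tarball with an md5 digest embedded in the URL query (checksum=md5:72bc7f8573f574c02d8c9a9b3496176b) and then verifies the saved file before treating the preload as cached. The snippet below is a minimal Go sketch of that verification step, assuming the tarball is already on disk; the path is illustrative and the digest is the one from the log, not a value to rely on elsewhere.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifyMD5 streams the file at path through an md5 hasher and compares the
// hex digest against the expected value, mirroring the "verifying checksum"
// step in the preload log above.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Illustrative path; the real tarball location and digest come from the log.
	path := "preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
	if err := verifyMD5(path, "72bc7f8573f574c02d8c9a9b3496176b"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("preload checksum OK")
}

Streaming the file through the hasher with io.Copy keeps memory use constant regardless of tarball size.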

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-575596
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-749576 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-749576 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.938424752s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.94s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I0929 08:29:23.242582  386225 preload.go:131] Checking if preload exists for k8s version v1.34.1 and runtime crio
I0929 08:29:23.242629  386225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21650-382648/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-749576
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-749576: exit status 85 (63.244191ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-575596 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-575596 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ delete  │ -p download-only-575596                                                                                                                                                   │ download-only-575596 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │ 29 Sep 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-749576 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-749576 │ jenkins │ v1.37.0 │ 29 Sep 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 08:29:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 08:29:18.346856  386583 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:29:18.347094  386583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:18.347103  386583 out.go:374] Setting ErrFile to fd 2...
	I0929 08:29:18.347107  386583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:29:18.347310  386583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:29:18.347784  386583 out.go:368] Setting JSON to true
	I0929 08:29:18.348650  386583 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7907,"bootTime":1759126651,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:29:18.348746  386583 start.go:140] virtualization: kvm guest
	I0929 08:29:18.350957  386583 out.go:99] [download-only-749576] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:29:18.351138  386583 notify.go:220] Checking for updates...
	I0929 08:29:18.352597  386583 out.go:171] MINIKUBE_LOCATION=21650
	I0929 08:29:18.354003  386583 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:29:18.355303  386583 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:29:18.356817  386583 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:29:18.358088  386583 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 08:29:18.360531  386583 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 08:29:18.360788  386583 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:29:18.385017  386583 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:29:18.385103  386583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:18.443079  386583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-29 08:29:18.432341414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:18.443235  386583 docker.go:318] overlay module found
	I0929 08:29:18.445123  386583 out.go:99] Using the docker driver based on user configuration
	I0929 08:29:18.445172  386583 start.go:304] selected driver: docker
	I0929 08:29:18.445181  386583 start.go:924] validating driver "docker" against <nil>
	I0929 08:29:18.445297  386583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:29:18.501098  386583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-29 08:29:18.491556996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:29:18.501261  386583 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 08:29:18.501740  386583 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 08:29:18.501912  386583 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 08:29:18.503891  386583 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-749576 host does not exist
	  To start a cluster, run: "minikube start -p download-only-749576"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-749576
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (1.2s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-084266 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-084266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-084266
--- PASS: TestDownloadOnlyKic (1.20s)

                                                
                                    
TestBinaryMirror (0.81s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 08:29:25.140455  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-867285 --alsologtostderr --binary-mirror http://127.0.0.1:34813 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-867285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-867285
--- PASS: TestBinaryMirror (0.81s)

                                                
                                    
TestOffline (86.16s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-138244 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-138244 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m23.672528351s)
helpers_test.go:175: Cleaning up "offline-crio-138244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-138244
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-138244: (2.490452438s)
--- PASS: TestOffline (86.16s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-051783
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-051783: exit status 85 (56.704787ms)

                                                
                                                
-- stdout --
	* Profile "addons-051783" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-051783"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-051783
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-051783: exit status 85 (57.613999ms)

                                                
                                                
-- stdout --
	* Profile "addons-051783" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-051783"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (409.12s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-051783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-051783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (6m49.117475916s)
--- PASS: TestAddons/Setup (409.12s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-051783 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-051783 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (29.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-051783 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-051783 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [678c7ce1-1a49-4925-bedd-f3e6fd848f03] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [678c7ce1-1a49-4925-bedd-f3e6fd848f03] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 29.004443893s
addons_test.go:694: (dbg) Run:  kubectl --context addons-051783 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-051783 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-051783 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (29.49s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.62s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.774759ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-051783
addons_test.go:332: (dbg) Run:  kubectl --context addons-051783 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.62s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-p475s" [b5ab8084-2d3e-4fc7-a533-b7c9d4f5a8e7] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003468873s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.66s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.986372ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mtm5q" [da6ff12e-68a8-4e8a-94bc-8353284efb30] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002515107s
addons_test.go:463: (dbg) Run:  kubectl --context addons-051783 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.66s)

                                                
                                    
TestAddons/parallel/Headlamp (32.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-051783 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-9nxzs" [a4575d99-9a3b-4cda-9731-fdad2c9fed6d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-9nxzs" [a4575d99-9a3b-4cda-9731-fdad2c9fed6d] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 26.004059609s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-051783 addons disable headlamp --alsologtostderr -v=1: (5.662215426s)
--- PASS: TestAddons/parallel/Headlamp (32.44s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-8dpkv" [030595fc-6ad4-46dc-8cba-72172c94d775] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003745072s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-69rbf" [e910f08f-8a1b-4329-966b-6b4b4d67677e] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003953682s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                    
TestAddons/StoppedEnableDisable (18.51s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-051783
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-051783: (18.253971579s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-051783
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-051783
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-051783
--- PASS: TestAddons/StoppedEnableDisable (18.51s)

                                                
                                    
TestCertOptions (34.09s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-491832 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-491832 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.062476112s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-491832 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-491832 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-491832 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-491832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-491832
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-491832: (2.396634667s)
--- PASS: TestCertOptions (34.09s)

                                                
                                    
TestCertExpiration (220.8s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-389992 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-389992 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (30.313192374s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-389992 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-389992 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.484265397s)
helpers_test.go:175: Cleaning up "cert-expiration-389992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-389992
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-389992: (2.996256834s)
--- PASS: TestCertExpiration (220.80s)

                                                
                                    
TestForceSystemdFlag (38.2s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-184956 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-184956 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.390598534s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-184956 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-184956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-184956
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-184956: (2.457008622s)
--- PASS: TestForceSystemdFlag (38.20s)

                                                
                                    
TestForceSystemdEnv (31.77s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-929795 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-929795 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.140410125s)
helpers_test.go:175: Cleaning up "force-systemd-env-929795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-929795
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-929795: (4.627901764s)
--- PASS: TestForceSystemdEnv (31.77s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.05s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0929 09:28:42.064965  386225 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 09:28:42.065121  386225 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3226216898/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 09:28:42.096613  386225 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3226216898/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 09:28:42.096657  386225 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 09:28:42.096809  386225 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 09:28:42.096891  386225 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3226216898/001/docker-machine-driver-kvm2
I0929 09:28:42.963895  386225 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3226216898/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 09:28:42.986045  386225 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3226216898/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.05s)

                                                
                                    
TestErrorSpam/setup (21.63s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-031994 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-031994 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-031994 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-031994 --driver=docker  --container-runtime=crio: (21.628036412s)
--- PASS: TestErrorSpam/setup (21.63s)

                                                
                                    
TestErrorSpam/start (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

                                                
                                    
TestErrorSpam/status (0.92s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 status
--- PASS: TestErrorSpam/status (0.92s)

                                                
                                    
TestErrorSpam/pause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
TestErrorSpam/unpause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

                                                
                                    
TestErrorSpam/stop (8.05s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 stop: (7.856466953s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-031994 --log_dir /tmp/nospam-031994 stop
--- PASS: TestErrorSpam/stop (8.05s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21650-382648/.minikube/files/etc/test/nested/copy/386225/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (42.94s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580781 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-580781 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (42.944337365s)
--- PASS: TestFunctional/serial/StartWithProxy (42.94s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.46s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0929 08:49:11.000017  386225 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580781 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-580781 --alsologtostderr -v=8: (7.459611347s)
functional_test.go:678: soft start took 7.460559618s for "functional-580781" cluster.
I0929 08:49:18.460352  386225 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.46s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-580781 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 cache add registry.k8s.io/pause:3.3: (1.023743841s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 cache add registry.k8s.io/pause:latest: (1.017398449s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-580781 /tmp/TestFunctionalserialCacheCmdcacheadd_local3346436084/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 cache add minikube-local-cache-test:functional-580781
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 cache add minikube-local-cache-test:functional-580781: (1.014209115s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 cache delete minikube-local-cache-test:functional-580781
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-580781
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.381057ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 kubectl -- --context functional-580781 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-580781 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (47.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580781 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-580781 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.063779796s)
functional_test.go:776: restart took 47.063897885s for "functional-580781" cluster.
I0929 08:50:12.509479  386225 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (47.06s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-580781 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 logs: (1.453512723s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 logs --file /tmp/TestFunctionalserialLogsFileCmd517214884/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 logs --file /tmp/TestFunctionalserialLogsFileCmd517214884/001/logs.txt: (1.463132417s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
TestFunctional/serial/InvalidService (4.04s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-580781 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-580781
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-580781: exit status 115 (350.586711ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32157 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-580781 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.04s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 config get cpus: exit status 14 (71.549054ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 config get cpus: exit status 14 (69.583426ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                    
TestFunctional/parallel/DryRun (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-580781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (151.511835ms)

                                                
                                                
-- stdout --
	* [functional-580781] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21650
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 08:56:29.966695  444184 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:56:29.966849  444184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:56:29.966862  444184 out.go:374] Setting ErrFile to fd 2...
	I0929 08:56:29.966868  444184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:56:29.967053  444184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:56:29.967506  444184 out.go:368] Setting JSON to false
	I0929 08:56:29.968488  444184 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9539,"bootTime":1759126651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:56:29.968595  444184 start.go:140] virtualization: kvm guest
	I0929 08:56:29.970450  444184 out.go:179] * [functional-580781] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 08:56:29.971646  444184 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:56:29.971660  444184 notify.go:220] Checking for updates...
	I0929 08:56:29.974121  444184 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:56:29.975250  444184 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:56:29.976419  444184 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:56:29.977667  444184 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:56:29.978842  444184 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:56:29.980553  444184 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:56:29.981084  444184 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:56:30.006376  444184 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:56:30.006496  444184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:56:30.060713  444184 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 08:56:30.049630826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:56:30.060848  444184 docker.go:318] overlay module found
	I0929 08:56:30.062721  444184 out.go:179] * Using the docker driver based on existing profile
	I0929 08:56:30.064019  444184 start.go:304] selected driver: docker
	I0929 08:56:30.064038  444184 start.go:924] validating driver "docker" against &{Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:56:30.064163  444184 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:56:30.065983  444184 out.go:203] 
	W0929 08:56:30.067234  444184 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 08:56:30.068422  444184 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580781 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-580781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (161.83612ms)

                                                
                                                
-- stdout --
	* [functional-580781] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21650
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 08:56:29.811737  444103 out.go:360] Setting OutFile to fd 1 ...
	I0929 08:56:29.811849  444103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:56:29.811859  444103 out.go:374] Setting ErrFile to fd 2...
	I0929 08:56:29.811863  444103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 08:56:29.812170  444103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 08:56:29.812644  444103 out.go:368] Setting JSON to false
	I0929 08:56:29.813627  444103 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9539,"bootTime":1759126651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 08:56:29.813742  444103 start.go:140] virtualization: kvm guest
	I0929 08:56:29.816544  444103 out.go:179] * [functional-580781] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 08:56:29.818102  444103 notify.go:220] Checking for updates...
	I0929 08:56:29.818131  444103 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 08:56:29.819556  444103 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 08:56:29.820861  444103 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 08:56:29.822291  444103 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 08:56:29.823470  444103 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 08:56:29.824674  444103 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 08:56:29.826387  444103 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 08:56:29.827036  444103 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 08:56:29.850550  444103 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 08:56:29.850645  444103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 08:56:29.907927  444103 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-29 08:56:29.896376483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 08:56:29.908073  444103 docker.go:318] overlay module found
	I0929 08:56:29.910156  444103 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 08:56:29.911544  444103 start.go:304] selected driver: docker
	I0929 08:56:29.911565  444103 start.go:924] validating driver "docker" against &{Name:functional-580781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-580781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 08:56:29.911654  444103 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 08:56:29.913621  444103 out.go:203] 
	W0929 08:56:29.915000  444103 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 08:56:29.916381  444103 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
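The French stderr above is the point of this test: it verifies that minikube localizes its user-facing messages. The driver line translates to "Using the docker driver based on the existing profile", and the error to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". A hedged sketch of reproducing the localized error by hand; the use of LC_ALL to select the locale and the exact flag values are assumptions inferred from the log, not the test's literal invocation:

	# assumed manual reproduction of the localized low-memory error (not the test's exact command)
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-580781 --driver=docker --memory=250MB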

                                                
                                    
TestFunctional/parallel/StatusCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)
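The -f template above pulls the Host, Kubelet, APIServer and Kubeconfig fields out of the status report. The same mechanism works for a single field; a small sketch, reusing the profile from this run:

	# print only the API server state (field name taken from the template above)
	out/minikube-linux-amd64 -p functional-580781 status -f '{{.APIServer}}'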

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh -n functional-580781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 cp functional-580781:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3079732188/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh -n functional-580781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh -n functional-580781 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.77s)

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/386225/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo cat /etc/test/nested/copy/386225/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/386225.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo cat /etc/ssl/certs/386225.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/386225.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo cat /usr/share/ca-certificates/386225.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3862252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo cat /etc/ssl/certs/3862252.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3862252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo cat /usr/share/ca-certificates/3862252.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.80s)
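Each certificate is checked in the three locations minikube populates when it syncs host certs into the guest: the file under /etc/ssl/certs, the copy under /usr/share/ca-certificates, and the hash-named entry OpenSSL looks up. A quick manual spot-check over the same paths (profile and file names taken from the log above):

	# confirm the synced cert is present and non-empty at all three locations
	for f in /etc/ssl/certs/386225.pem /usr/share/ca-certificates/386225.pem /etc/ssl/certs/51391683.0; do
	  out/minikube-linux-amd64 -p functional-580781 ssh "sudo test -s $f && echo $f: ok"
	done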

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-580781 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
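The go-template above prints only the label keys of the first node. kubectl can show the same information without a template; a simpler equivalent, assuming the same context:

	# list nodes with their full label set
	kubectl --context functional-580781 get nodes --show-labels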

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 ssh "sudo systemctl is-active docker": exit status 1 (297.109931ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 ssh "sudo systemctl is-active containerd": exit status 1 (273.611154ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
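The non-zero exits here are the expected result: "inactive" plus exit status 3 is what systemctl is-active reports for a unit that is not running, so docker and containerd being down is exactly what a crio-based cluster should show. A small sketch to eyeball all three runtimes at once; the crio unit name is an assumption, the other two come from the log:

	# expected output: inactive, inactive, active
	out/minikube-linux-amd64 -p functional-580781 ssh "sudo systemctl is-active docker containerd crio"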

                                                
                                    
TestFunctional/parallel/License (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-580781 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-580781 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-580781 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-580781 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 436962: os: process already finished
helpers_test.go:525: unable to kill pid 436623: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-580781 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-580781 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-580781
localhost/kicbase/echo-server:functional-580781
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-580781 image ls --format short --alsologtostderr:
I0929 09:00:23.513918  446999 out.go:360] Setting OutFile to fd 1 ...
I0929 09:00:23.514166  446999 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 09:00:23.514174  446999 out.go:374] Setting ErrFile to fd 2...
I0929 09:00:23.514178  446999 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 09:00:23.514389  446999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
I0929 09:00:23.515006  446999 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 09:00:23.515094  446999 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 09:00:23.515505  446999 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
I0929 09:00:23.533733  446999 ssh_runner.go:195] Run: systemctl --version
I0929 09:00:23.533784  446999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
I0929 09:00:23.551671  446999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
I0929 09:00:23.644907  446999 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
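As the stderr shows, image ls works by ssh-ing into the node and reading sudo crictl images --output json; the short format then reduces that to the repo:tag list printed above. The raw CRI view can be inspected directly with the same command the test used:

	# raw image list as the CRI runtime sees it (same command the stderr above runs)
	out/minikube-linux-amd64 -p functional-580781 ssh "sudo crictl images --output json"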

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-580781 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/my-image                      │ functional-580781  │ e6673fa8dd059 │ 1.47MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/kicbase/echo-server           │ functional-580781  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-580781  │ c8eca41bc47ed │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-580781 image ls --format table --alsologtostderr:
I0929 09:00:26.227612  447552 out.go:360] Setting OutFile to fd 1 ...
I0929 09:00:26.227723  447552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 09:00:26.227732  447552 out.go:374] Setting ErrFile to fd 2...
I0929 09:00:26.227736  447552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 09:00:26.227969  447552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
I0929 09:00:26.228604  447552 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 09:00:26.228692  447552 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 09:00:26.229079  447552 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
I0929 09:00:26.246752  447552 ssh_runner.go:195] Run: systemctl --version
I0929 09:00:26.246794  447552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
I0929 09:00:26.263958  447552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
I0929 09:00:26.356616  447552 ssh_runner.go:195] Run: sudo crictl images --output json
E0929 09:01:15.700115  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-580781 image ls --format json --alsologtostderr:
[{"id":"7fcc996c5f7bd0f0db5afa442615b9f4f65be4e190e5952e9eedd6cd00a1cc05","repoDigests":["docker.io/library/5e4998621a54e5f15df19e5f6a8d033ad8983876895e257cdc51df431cafa3af-tmp@sha256:01f0b6361de72a8ac798a5d4bd7872bdfd78be32cd5cf9f2b259cf5786116270"],"repoTags":[],"size":"1465612"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"e6673fa8dd059f195f73986779f296aefe12f4db68210e78acfd27d14fdd9b30","repoDigests":["localhost/my-image@sha256:606047fba735ba571d6913787e3e863840acf9fb254ed60d51cecbb2d90dc282"],"repoTags":["localhost/my-image:functional-580781"],"size":"1468194"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":
["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"c8eca41bc47eddebb2141ba774a212296a1bdae215b7641a9c0b4dc1be657efb","repoDigests":["localhost/minikube-local-cache-test@sha256:d3fcdec07c00da36b3cad04f83a2a17c509121e3e9abc6a13f49c806906aee72"],"repoTags":["localhost/minikube-local-cache-test:functional-580781"],"size":"3330"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04
c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99
250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-580781"],"size":"4943877"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360
bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["reg
istry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-580781 image ls --format json --alsologtostderr:
I0929 09:00:26.015285  447503 out.go:360] Setting OutFile to fd 1 ...
I0929 09:00:26.015564  447503 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 09:00:26.015575  447503 out.go:374] Setting ErrFile to fd 2...
I0929 09:00:26.015581  447503 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 09:00:26.015795  447503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
I0929 09:00:26.016405  447503 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 09:00:26.016517  447503 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 09:00:26.016961  447503 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
I0929 09:00:26.034899  447503 ssh_runner.go:195] Run: systemctl --version
I0929 09:00:26.034945  447503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
I0929 09:00:26.052242  447503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
I0929 09:00:26.143849  447503 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-580781 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-580781
size: "4943877"
- id: c8eca41bc47eddebb2141ba774a212296a1bdae215b7641a9c0b4dc1be657efb
repoDigests:
- localhost/minikube-local-cache-test@sha256:d3fcdec07c00da36b3cad04f83a2a17c509121e3e9abc6a13f49c806906aee72
repoTags:
- localhost/minikube-local-cache-test:functional-580781
size: "3330"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-580781 image ls --format yaml --alsologtostderr:
I0929 09:00:23.730947  447048 out.go:360] Setting OutFile to fd 1 ...
I0929 09:00:23.731237  447048 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 09:00:23.731249  447048 out.go:374] Setting ErrFile to fd 2...
I0929 09:00:23.731253  447048 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 09:00:23.731428  447048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
I0929 09:00:23.732022  447048 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 09:00:23.732111  447048 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 09:00:23.732496  447048 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
I0929 09:00:23.749687  447048 ssh_runner.go:195] Run: systemctl --version
I0929 09:00:23.749727  447048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
I0929 09:00:23.767194  447048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
I0929 09:00:23.859795  447048 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 ssh pgrep buildkitd: exit status 1 (251.983193ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image build -t localhost/my-image:functional-580781 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 image build -t localhost/my-image:functional-580781 testdata/build --alsologtostderr: (1.599612589s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-580781 image build -t localhost/my-image:functional-580781 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7fcc996c5f7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-580781
--> e6673fa8dd0
Successfully tagged localhost/my-image:functional-580781
e6673fa8dd059f195f73986779f296aefe12f4db68210e78acfd27d14fdd9b30
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-580781 image build -t localhost/my-image:functional-580781 testdata/build --alsologtostderr:
I0929 09:00:24.197941  447198 out.go:360] Setting OutFile to fd 1 ...
I0929 09:00:24.198182  447198 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 09:00:24.198190  447198 out.go:374] Setting ErrFile to fd 2...
I0929 09:00:24.198194  447198 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 09:00:24.198394  447198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
I0929 09:00:24.198989  447198 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 09:00:24.199692  447198 config.go:182] Loaded profile config "functional-580781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I0929 09:00:24.200151  447198 cli_runner.go:164] Run: docker container inspect functional-580781 --format={{.State.Status}}
I0929 09:00:24.218072  447198 ssh_runner.go:195] Run: systemctl --version
I0929 09:00:24.218133  447198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580781
I0929 09:00:24.235269  447198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/functional-580781/id_rsa Username:docker}
I0929 09:00:24.326647  447198 build_images.go:161] Building image from path: /tmp/build.1285072895.tar
I0929 09:00:24.326718  447198 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 09:00:24.336382  447198 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1285072895.tar
I0929 09:00:24.339809  447198 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1285072895.tar: stat -c "%s %y" /var/lib/minikube/build/build.1285072895.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1285072895.tar': No such file or directory
I0929 09:00:24.339858  447198 ssh_runner.go:362] scp /tmp/build.1285072895.tar --> /var/lib/minikube/build/build.1285072895.tar (3072 bytes)
I0929 09:00:24.364428  447198 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1285072895
I0929 09:00:24.373271  447198 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1285072895 -xf /var/lib/minikube/build/build.1285072895.tar
I0929 09:00:24.382180  447198 crio.go:315] Building image: /var/lib/minikube/build/build.1285072895
I0929 09:00:24.382241  447198 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-580781 /var/lib/minikube/build/build.1285072895 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0929 09:00:25.729504  447198 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-580781 /var/lib/minikube/build/build.1285072895 --cgroup-manager=cgroupfs: (1.347229819s)
I0929 09:00:25.729575  447198 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1285072895
I0929 09:00:25.738812  447198 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1285072895.tar
I0929 09:00:25.747690  447198 build_images.go:217] Built localhost/my-image:functional-580781 from /tmp/build.1285072895.tar
I0929 09:00:25.747717  447198 build_images.go:133] succeeded building to: functional-580781
I0929 09:00:25.747721  447198 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)
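The build output pins down what testdata/build must contain: a Dockerfile with the three steps shown (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) plus a content.txt file. A hedged sketch that recreates an equivalent context and runs the same in-cluster build; the temporary directory and file contents are illustrative, not the repository's actual testdata:

	# rebuild an equivalent of testdata/build and build it inside the cluster
	mkdir -p /tmp/minikube-build-demo && cd /tmp/minikube-build-demo
	printf 'hello\n' > content.txt
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-amd64 -p functional-580781 image build -t localhost/my-image:functional-580781 .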

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-580781
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image load --daemon kicbase/echo-server:functional-580781 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image load --daemon kicbase/echo-server:functional-580781 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-580781
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image load --daemon kicbase/echo-server:functional-580781 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image save kicbase/echo-server:functional-580781 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image rm kicbase/echo-server:functional-580781 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-580781
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 image save --daemon kicbase/echo-server:functional-580781 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-580781
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
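Taken together, these last subtests exercise a full round trip of the image commands: load from the local daemon into the cluster, save to a tarball, remove, load back from the file, and finally save back into the daemon, where the image lands under the localhost/ prefix (which is why the docker image inspect above targets localhost/kicbase/echo-server). A condensed sketch of that flow; the tarball path is illustrative, the image name comes from the logs:

	# daemon -> cluster -> tarball -> cluster -> daemon round trip
	out/minikube-linux-amd64 -p functional-580781 image load --daemon kicbase/echo-server:functional-580781
	out/minikube-linux-amd64 -p functional-580781 image save kicbase/echo-server:functional-580781 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-580781 image rm kicbase/echo-server:functional-580781
	out/minikube-linux-amd64 -p functional-580781 image load /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-580781 image save --daemon kicbase/echo-server:functional-580781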

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (118.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdany-port2091007709/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759135828609145608" to /tmp/TestFunctionalparallelMountCmdany-port2091007709/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759135828609145608" to /tmp/TestFunctionalparallelMountCmdany-port2091007709/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759135828609145608" to /tmp/TestFunctionalparallelMountCmdany-port2091007709/001/test-1759135828609145608
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.486609ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 08:50:28.870912  386225 retry.go:31] will retry after 264.597443ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 08:50 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 08:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 08:50 test-1759135828609145608
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh cat /mount-9p/test-1759135828609145608
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-580781 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [5bd849e3-1d43-4460-85b9-7c9cdd1f19db] Pending
helpers_test.go:352: "busybox-mount" [5bd849e3-1d43-4460-85b9-7c9cdd1f19db] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0929 08:51:15.700220  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:15.706674  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:15.718011  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:15.739423  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:15.780919  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:15.862392  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:16.024138  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:16.346438  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:16.987979  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:18.269436  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:20.831304  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:25.952651  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:36.194733  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 08:51:56.676953  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [5bd849e3-1d43-4460-85b9-7c9cdd1f19db] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [5bd849e3-1d43-4460-85b9-7c9cdd1f19db] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m56.002953013s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-580781 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdany-port2091007709/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (118.41s)
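
The any-port flow above is: start `minikube mount` as a background daemon, poll `findmnt` over ssh until the 9p filesystem appears (hence the retry.go:31 line), seed the host directory with files, run the busybox-mount pod against it, then tear the mount down. A minimal Go sketch of the mount-and-poll part, assuming minikube is on PATH and the functional-580781 profile already exists; the host directory, retry count and interval are illustrative, not taken from the test source.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		profile := "functional-580781" // assumed existing profile
		hostDir := "/tmp/mount-demo"   // hypothetical host directory
		guestDir := "/mount-9p"

		// Start the 9p mount helper in the background, like the test's (dbg) daemon step.
		mount := exec.Command("minikube", "mount", "-p", profile, hostDir+":"+guestDir)
		if err := mount.Start(); err != nil {
			panic(err)
		}
		defer mount.Process.Kill() // rough stand-in for the test's "stopping [...]" teardown

		// Poll until findmnt inside the guest reports a 9p filesystem, mirroring the retry above.
		for i := 0; i < 10; i++ {
			out, err := exec.Command("minikube", "-p", profile, "ssh",
				"findmnt -T "+guestDir+" | grep 9p").CombinedOutput()
			if err == nil {
				fmt.Printf("mounted: %s", out)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("mount did not appear in time")
	}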

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdspecific-port2497644652/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (271.589796ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 08:52:27.286698  386225 retry.go:31] will retry after 505.166906ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdspecific-port2497644652/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 ssh "sudo umount -f /mount-9p": exit status 1 (258.63597ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-580781 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdspecific-port2497644652/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.77s)
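
The teardown above runs `sudo umount -f /mount-9p` and gets exit status 32 with "umount: /mount-9p: not mounted." because the mount helper has already cleaned up; the test logs the non-zero exit and still passes. A sketch of that tolerant cleanup, reusing the same profile name; matching on the "not mounted" text is an illustration, not the harness's actual check.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// forceUnmount tries to unmount a guest path and treats "not mounted" as success,
	// since the mount helper may already have cleaned up after itself.
	func forceUnmount(profile, guestDir string) error {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"sudo umount -f "+guestDir).CombinedOutput()
		if err != nil && strings.Contains(string(out), "not mounted") {
			return nil // already gone: nothing to do
		}
		return err
	}

	func main() {
		if err := forceUnmount("functional-580781", "/mount-9p"); err != nil {
			fmt.Println("unmount failed:", err)
			return
		}
		fmt.Println("mount point is clean")
	}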

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T" /mount1: exit status 1 (310.53699ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 08:52:29.097194  386225 retry.go:31] will retry after 704.974587ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-580781 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup269835918/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)
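
VerifyCleanup launches three mount helpers for /mount1–/mount3 and then relies on a single `minikube mount -p <profile> --kill=true` call to terminate them all, which is why the later per-helper stop steps report "unable to find parent, assuming dead". A sketch of that bulk cleanup; the host directory is a placeholder.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "functional-580781" // assumed existing profile
		hostDir := "/tmp/mount-demo"   // hypothetical host directory

		// Launch several mount helpers, like the three (dbg) daemon steps above.
		var helpers []*exec.Cmd
		for _, guest := range []string{"/mount1", "/mount2", "/mount3"} {
			c := exec.Command("minikube", "mount", "-p", profile, hostDir+":"+guest)
			if err := c.Start(); err != nil {
				panic(err)
			}
			helpers = append(helpers, c)
		}

		// One --kill=true invocation terminates every mount process for the profile.
		if out, err := exec.Command("minikube", "mount", "-p", profile, "--kill=true").CombinedOutput(); err != nil {
			fmt.Printf("kill failed: %v\n%s", err, out)
		}

		// Reap the children so they do not linger as zombies; errors are expected here.
		for _, c := range helpers {
			_ = c.Wait()
		}
		fmt.Println("all mount helpers stopped")
	}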

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-580781 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
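
The "failed to stop process: signal: terminated" message is the error string Go's exec.Cmd.Wait returns for a child that exited on SIGTERM, i.e. the tunnel did stop as requested and the subtest passes anyway. A short illustration of that error shape, using an ordinary sleep process as a stand-in for `minikube tunnel`.

	package main

	import (
		"fmt"
		"os/exec"
		"syscall"
		"time"
	)

	func main() {
		// Stand-in for the long-running "minikube tunnel" child.
		cmd := exec.Command("sleep", "60")
		if err := cmd.Start(); err != nil {
			panic(err)
		}

		time.Sleep(100 * time.Millisecond)
		_ = cmd.Process.Signal(syscall.SIGTERM) // how the harness asks the child to stop

		// Wait returns an *exec.ExitError whose message is "signal: terminated".
		if err := cmd.Wait(); err != nil {
			fmt.Println("stopped with:", err)
		}
	}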

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "323.107082ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "52.027746ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "325.566192ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "51.327017ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
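
The ProfileCmd timings compare a full `profile list -o json` (~325ms here) against the `--light` variant (~51ms), which skips the live status probes. A sketch that reproduces the measurement and decodes the output generically, since this report does not show the JSON schema; plain `minikube` on PATH is assumed in place of the job's out/minikube-linux-amd64 binary.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"time"
	)

	// timedProfileList runs "minikube profile list -o json" with optional extra flags
	// and reports how long it took, like the Took "..." lines above.
	func timedProfileList(extra ...string) (map[string]interface{}, time.Duration, error) {
		args := append([]string{"profile", "list", "-o", "json"}, extra...)
		start := time.Now()
		out, err := exec.Command("minikube", args...).Output()
		elapsed := time.Since(start)
		if err != nil {
			return nil, elapsed, err
		}
		var parsed map[string]interface{} // schema not assumed; decode generically
		if err := json.Unmarshal(out, &parsed); err != nil {
			return nil, elapsed, err
		}
		return parsed, elapsed, nil
	}

	func main() {
		for _, flags := range [][]string{nil, {"--light"}} {
			_, took, err := timedProfileList(flags...)
			fmt.Printf("flags=%v took=%v err=%v\n", flags, took, err)
		}
	}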

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 service list: (1.686890545s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-580781 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-580781 service list -o json: (1.68794902s)
functional_test.go:1504: Took "1.688061318s" to run "out/minikube-linux-amd64 -p functional-580781 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-580781
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-580781
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-580781
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (151.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0929 09:06:15.699897  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:07:38.766046  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m31.291675711s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (151.99s)
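
StartCluster creates all three control-plane nodes with a single `start --ha` invocation and then verifies them with `status`. A sketch of driving the same two commands from Go with the docker driver and crio runtime used in this job; the ha-demo profile name is a placeholder.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(args ...string) error {
		cmd := exec.Command("minikube", args...)
		cmd.Stdout = os.Stdout // stream progress like the test's -v 5 output
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		profile := "ha-demo" // placeholder profile name

		// One start call brings up the whole HA control plane, as in the Run line above.
		if err := run("start", "-p", profile, "--ha", "--memory", "3072",
			"--wait", "true", "--driver=docker", "--container-runtime=crio"); err != nil {
			fmt.Println("start failed:", err)
			return
		}

		// status should then report every node, as the follow-up Run line checks.
		if err := run("-p", profile, "status"); err != nil {
			fmt.Println("status reported a problem:", err)
		}
	}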

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 kubectl -- rollout status deployment/busybox: (2.150120385s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-7qxjd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-sqd26 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-zpcfv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-7qxjd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-sqd26 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-zpcfv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-7qxjd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-sqd26 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-zpcfv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.59s)
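
DeployApp reads the busybox pod names back with a jsonpath query and then runs nslookup inside each pod for three DNS targets. A compact sketch of that loop, calling kubectl directly instead of going through `minikube kubectl --`; the context name is taken from the log, the rest is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "ha-061106"

		// Fetch the pod names, as in the jsonpath query above.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			panic(err)
		}
		pods := strings.Fields(string(out))

		// Every pod must resolve each name through cluster DNS.
		targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range pods {
			for _, target := range targets {
				if err := exec.Command("kubectl", "--context", ctx, "exec", pod,
					"--", "nslookup", target).Run(); err != nil {
					fmt.Printf("%s failed to resolve %s: %v\n", pod, target, err)
				}
			}
		}
		fmt.Printf("checked %d pods\n", len(pods))
	}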

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-7qxjd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-7qxjd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-sqd26 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-sqd26 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-zpcfv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 kubectl -- exec busybox-7b57f96db7-zpcfv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.08s)
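
PingHostFromPods resolves host.minikube.internal inside each pod, where the awk 'NR==5' | cut pipeline picks the address out of nslookup's output, and then pings that address once. A sketch for a single pod; the pod name is a placeholder and the pipeline is copied from the commands above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "ha-061106"
		pod := "busybox-example" // placeholder; the test iterates over real pod names

		// The pipeline returns just the resolved IP of host.minikube.internal.
		pipeline := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
		out, err := exec.Command("kubectl", "--context", ctx, "exec", pod,
			"--", "sh", "-c", pipeline).Output()
		if err != nil {
			panic(err)
		}
		hostIP := strings.TrimSpace(string(out))

		// A single ping proves the pod can reach the host network.
		if err := exec.Command("kubectl", "--context", ctx, "exec", pod,
			"--", "sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
			fmt.Println("ping failed:", err)
			return
		}
		fmt.Println("pod can reach", hostIP)
	}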

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (24.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 node add --alsologtostderr -v 5: (23.58155642s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-061106 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (16.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp testdata/cp-test.txt ha-061106:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3855489211/001/cp-test_ha-061106.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106:/home/docker/cp-test.txt ha-061106-m02:/home/docker/cp-test_ha-061106_ha-061106-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m02 "sudo cat /home/docker/cp-test_ha-061106_ha-061106-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106:/home/docker/cp-test.txt ha-061106-m03:/home/docker/cp-test_ha-061106_ha-061106-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m03 "sudo cat /home/docker/cp-test_ha-061106_ha-061106-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106:/home/docker/cp-test.txt ha-061106-m04:/home/docker/cp-test_ha-061106_ha-061106-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m04 "sudo cat /home/docker/cp-test_ha-061106_ha-061106-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp testdata/cp-test.txt ha-061106-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3855489211/001/cp-test_ha-061106-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m02:/home/docker/cp-test.txt ha-061106:/home/docker/cp-test_ha-061106-m02_ha-061106.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106 "sudo cat /home/docker/cp-test_ha-061106-m02_ha-061106.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m02:/home/docker/cp-test.txt ha-061106-m03:/home/docker/cp-test_ha-061106-m02_ha-061106-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m03 "sudo cat /home/docker/cp-test_ha-061106-m02_ha-061106-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m02:/home/docker/cp-test.txt ha-061106-m04:/home/docker/cp-test_ha-061106-m02_ha-061106-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m04 "sudo cat /home/docker/cp-test_ha-061106-m02_ha-061106-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp testdata/cp-test.txt ha-061106-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3855489211/001/cp-test_ha-061106-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m03:/home/docker/cp-test.txt ha-061106:/home/docker/cp-test_ha-061106-m03_ha-061106.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106 "sudo cat /home/docker/cp-test_ha-061106-m03_ha-061106.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m03:/home/docker/cp-test.txt ha-061106-m02:/home/docker/cp-test_ha-061106-m03_ha-061106-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m02 "sudo cat /home/docker/cp-test_ha-061106-m03_ha-061106-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m03:/home/docker/cp-test.txt ha-061106-m04:/home/docker/cp-test_ha-061106-m03_ha-061106-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m04 "sudo cat /home/docker/cp-test_ha-061106-m03_ha-061106-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp testdata/cp-test.txt ha-061106-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3855489211/001/cp-test_ha-061106-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m04:/home/docker/cp-test.txt ha-061106:/home/docker/cp-test_ha-061106-m04_ha-061106.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106 "sudo cat /home/docker/cp-test_ha-061106-m04_ha-061106.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m04:/home/docker/cp-test.txt ha-061106-m02:/home/docker/cp-test_ha-061106-m04_ha-061106-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m02 "sudo cat /home/docker/cp-test_ha-061106-m04_ha-061106-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 cp ha-061106-m04:/home/docker/cp-test.txt ha-061106-m03:/home/docker/cp-test_ha-061106-m04_ha-061106-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 ssh -n ha-061106-m03 "sudo cat /home/docker/cp-test_ha-061106-m04_ha-061106-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.53s)
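
CopyFile pushes the same testdata file to every node with `minikube cp` and reads it back over `ssh -n <node>` to confirm the copy landed. A sketch for one source/destination pair; the node naming follows the ha-061106/-m02 pattern shown above but is otherwise a placeholder.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// copyAndVerify copies a local file to a node path and reads it back via ssh.
	func copyAndVerify(profile, node, localPath, remotePath string) (string, error) {
		// minikube cp accepts <node>:<path> destinations, as in the cp steps above.
		if err := exec.Command("minikube", "-p", profile, "cp",
			localPath, node+":"+remotePath).Run(); err != nil {
			return "", err
		}
		// Read it back on that node to confirm the contents.
		out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
			"sudo cat "+remotePath).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		got, err := copyAndVerify("ha-061106", "ha-061106-m02",
			"testdata/cp-test.txt", "/home/docker/cp-test.txt")
		if err != nil {
			fmt.Println("copy failed:", err)
			return
		}
		fmt.Println("remote file contains:", got)
	}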

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (18.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 node stop m02 --alsologtostderr -v 5: (18.0990917s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-061106 status --alsologtostderr -v 5: exit status 7 (680.454265ms)

                                                
                                                
-- stdout --
	ha-061106
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-061106-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-061106-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-061106-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 09:09:40.984323  472460 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:09:40.984624  472460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:09:40.984636  472460 out.go:374] Setting ErrFile to fd 2...
	I0929 09:09:40.984640  472460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:09:40.984816  472460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:09:40.984996  472460 out.go:368] Setting JSON to false
	I0929 09:09:40.985026  472460 mustload.go:65] Loading cluster: ha-061106
	I0929 09:09:40.985148  472460 notify.go:220] Checking for updates...
	I0929 09:09:40.985386  472460 config.go:182] Loaded profile config "ha-061106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:09:40.985406  472460 status.go:174] checking status of ha-061106 ...
	I0929 09:09:40.985815  472460 cli_runner.go:164] Run: docker container inspect ha-061106 --format={{.State.Status}}
	I0929 09:09:41.004540  472460 status.go:371] ha-061106 host status = "Running" (err=<nil>)
	I0929 09:09:41.004566  472460 host.go:66] Checking if "ha-061106" exists ...
	I0929 09:09:41.004921  472460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-061106
	I0929 09:09:41.023399  472460 host.go:66] Checking if "ha-061106" exists ...
	I0929 09:09:41.023672  472460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:09:41.023712  472460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-061106
	I0929 09:09:41.043209  472460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/ha-061106/id_rsa Username:docker}
	I0929 09:09:41.137608  472460 ssh_runner.go:195] Run: systemctl --version
	I0929 09:09:41.141975  472460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:09:41.154148  472460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:09:41.211606  472460 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 09:09:41.200682574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:
[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:09:41.212244  472460 kubeconfig.go:125] found "ha-061106" server: "https://192.168.49.254:8443"
	I0929 09:09:41.212285  472460 api_server.go:166] Checking apiserver status ...
	I0929 09:09:41.212328  472460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:09:41.224859  472460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup
	W0929 09:09:41.234890  472460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:09:41.234964  472460 ssh_runner.go:195] Run: ls
	I0929 09:09:41.238822  472460 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 09:09:41.244431  472460 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 09:09:41.244457  472460 status.go:463] ha-061106 apiserver status = Running (err=<nil>)
	I0929 09:09:41.244467  472460 status.go:176] ha-061106 status: &{Name:ha-061106 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 09:09:41.244482  472460 status.go:174] checking status of ha-061106-m02 ...
	I0929 09:09:41.244712  472460 cli_runner.go:164] Run: docker container inspect ha-061106-m02 --format={{.State.Status}}
	I0929 09:09:41.262732  472460 status.go:371] ha-061106-m02 host status = "Stopped" (err=<nil>)
	I0929 09:09:41.262752  472460 status.go:384] host is not running, skipping remaining checks
	I0929 09:09:41.262759  472460 status.go:176] ha-061106-m02 status: &{Name:ha-061106-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 09:09:41.262777  472460 status.go:174] checking status of ha-061106-m03 ...
	I0929 09:09:41.263048  472460 cli_runner.go:164] Run: docker container inspect ha-061106-m03 --format={{.State.Status}}
	I0929 09:09:41.280949  472460 status.go:371] ha-061106-m03 host status = "Running" (err=<nil>)
	I0929 09:09:41.280979  472460 host.go:66] Checking if "ha-061106-m03" exists ...
	I0929 09:09:41.281256  472460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-061106-m03
	I0929 09:09:41.299086  472460 host.go:66] Checking if "ha-061106-m03" exists ...
	I0929 09:09:41.299380  472460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:09:41.299421  472460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-061106-m03
	I0929 09:09:41.316894  472460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/ha-061106-m03/id_rsa Username:docker}
	I0929 09:09:41.410703  472460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:09:41.422925  472460 kubeconfig.go:125] found "ha-061106" server: "https://192.168.49.254:8443"
	I0929 09:09:41.422953  472460 api_server.go:166] Checking apiserver status ...
	I0929 09:09:41.422994  472460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:09:41.434770  472460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1368/cgroup
	W0929 09:09:41.444578  472460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1368/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:09:41.444623  472460 ssh_runner.go:195] Run: ls
	I0929 09:09:41.448207  472460 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 09:09:41.452334  472460 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 09:09:41.452366  472460 status.go:463] ha-061106-m03 apiserver status = Running (err=<nil>)
	I0929 09:09:41.452383  472460 status.go:176] ha-061106-m03 status: &{Name:ha-061106-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 09:09:41.452405  472460 status.go:174] checking status of ha-061106-m04 ...
	I0929 09:09:41.452689  472460 cli_runner.go:164] Run: docker container inspect ha-061106-m04 --format={{.State.Status}}
	I0929 09:09:41.472004  472460 status.go:371] ha-061106-m04 host status = "Running" (err=<nil>)
	I0929 09:09:41.472027  472460 host.go:66] Checking if "ha-061106-m04" exists ...
	I0929 09:09:41.472304  472460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-061106-m04
	I0929 09:09:41.491715  472460 host.go:66] Checking if "ha-061106-m04" exists ...
	I0929 09:09:41.492057  472460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:09:41.492121  472460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-061106-m04
	I0929 09:09:41.510437  472460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/ha-061106-m04/id_rsa Username:docker}
	I0929 09:09:41.604107  472460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:09:41.616293  472460 status.go:176] ha-061106-m04 status: &{Name:ha-061106-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (18.78s)
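
After `node stop m02`, `status` still prints a full per-node report but exits non-zero (exit status 7 in this run) because one host is Stopped, and the test accepts that exit code. A sketch that runs status and separates "all running" from "something is down" by inspecting the exit code via exec.ExitError; the specific value 7 is what this log shows, not a contract assumed here.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "ha-061106"

		out, err := exec.Command("minikube", "-p", profile, "status").Output()
		fmt.Print(string(out)) // per-node host/kubelet/apiserver summary, as in the -- stdout -- block

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("every node is running")
		case errors.As(err, &exitErr):
			// In this run a stopped control-plane node produced exit status 7.
			fmt.Println("status exited with", exitErr.ExitCode(), "- at least one node is down")
		default:
			fmt.Println("could not run minikube status:", err)
		}
	}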

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (9.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 node start m02 --alsologtostderr -v 5: (8.285336163s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (114.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 stop --alsologtostderr -v 5
E0929 09:10:19.892784  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:19.899204  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:19.910620  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:19.932024  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:19.973465  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:20.055453  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:20.217334  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:20.538745  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:21.180788  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:22.462380  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:25.025292  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:10:30.147297  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 stop --alsologtostderr -v 5: (44.530912123s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 start --wait true --alsologtostderr -v 5
E0929 09:10:40.389525  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:11:00.870998  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:11:15.700062  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:11:41.833041  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 start --wait true --alsologtostderr -v 5: (1m9.976756387s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (114.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 node delete m03 --alsologtostderr -v 5: (10.492347842s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.27s)
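
The last check above asks kubectl for each node's Ready condition through a go-template that prints one status per line; the cluster is healthy if every line reads True. A sketch of that check with the template from the log (minus the extra shell quoting); the context name is taken from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// One "True"/"False"/"Unknown" per node: the Ready condition status.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "--context", "ha-061106",
			"get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			panic(err)
		}
		for _, line := range strings.Fields(string(out)) {
			if line != "True" {
				fmt.Println("found a node that is not Ready:", line)
				return
			}
		}
		fmt.Println("all remaining nodes are Ready")
	}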

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (42.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 stop --alsologtostderr -v 5: (42.168708972s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-061106 status --alsologtostderr -v 5: exit status 7 (106.371268ms)

                                                
                                                
-- stdout --
	ha-061106
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-061106-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-061106-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 09:12:41.171748  489091 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:12:41.172031  489091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:12:41.172043  489091 out.go:374] Setting ErrFile to fd 2...
	I0929 09:12:41.172047  489091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:12:41.172246  489091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:12:41.172738  489091 out.go:368] Setting JSON to false
	I0929 09:12:41.172784  489091 mustload.go:65] Loading cluster: ha-061106
	I0929 09:12:41.173410  489091 notify.go:220] Checking for updates...
	I0929 09:12:41.174035  489091 config.go:182] Loaded profile config "ha-061106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:12:41.174064  489091 status.go:174] checking status of ha-061106 ...
	I0929 09:12:41.174568  489091 cli_runner.go:164] Run: docker container inspect ha-061106 --format={{.State.Status}}
	I0929 09:12:41.193698  489091 status.go:371] ha-061106 host status = "Stopped" (err=<nil>)
	I0929 09:12:41.193724  489091 status.go:384] host is not running, skipping remaining checks
	I0929 09:12:41.193733  489091 status.go:176] ha-061106 status: &{Name:ha-061106 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 09:12:41.193787  489091 status.go:174] checking status of ha-061106-m02 ...
	I0929 09:12:41.194172  489091 cli_runner.go:164] Run: docker container inspect ha-061106-m02 --format={{.State.Status}}
	I0929 09:12:41.213709  489091 status.go:371] ha-061106-m02 host status = "Stopped" (err=<nil>)
	I0929 09:12:41.213732  489091 status.go:384] host is not running, skipping remaining checks
	I0929 09:12:41.213738  489091 status.go:176] ha-061106-m02 status: &{Name:ha-061106-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 09:12:41.213762  489091 status.go:174] checking status of ha-061106-m04 ...
	I0929 09:12:41.214061  489091 cli_runner.go:164] Run: docker container inspect ha-061106-m04 --format={{.State.Status}}
	I0929 09:12:41.230859  489091 status.go:371] ha-061106-m04 host status = "Stopped" (err=<nil>)
	I0929 09:12:41.230881  489091 status.go:384] host is not running, skipping remaining checks
	I0929 09:12:41.230888  489091 status.go:176] ha-061106-m04 status: &{Name:ha-061106-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (53.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0929 09:13:03.754665  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.933641377s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (35.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-061106 node add --control-plane --alsologtostderr -v 5: (34.7222517s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-061106 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (69.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-539187 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0929 09:15:19.895045  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-539187 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m9.701760015s)
--- PASS: TestJSONOutput/start/Command (69.70s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-539187 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-539187 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-539187 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-539187 --output=json --user=testUser: (7.9370518s)
--- PASS: TestJSONOutput/stop/Command (7.94s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-642418 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-642418 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (68.458407ms)

-- stdout --
	{"specversion":"1.0","id":"6b9ff7a9-4d3c-41a8-90f5-32eac44b6964","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-642418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f9163be-e890-420b-bd0b-5825edb0f05e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21650"}}
	{"specversion":"1.0","id":"ca00c299-b870-4008-86c1-c56d209018b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9f75b2b3-6e22-41f4-be0d-0a3df6d26f0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig"}}
	{"specversion":"1.0","id":"68e993e4-871e-4bae-ad2a-a17f9eab7d15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube"}}
	{"specversion":"1.0","id":"47dfa948-d2e3-491b-8f47-b2a2323b11b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"304a414b-50be-4927-959e-66791b9ad653","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9db45ce9-aa1a-4f18-bb32-6078c8c0d2de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-642418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-642418
--- PASS: TestErrorJSONOutput (0.21s)
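For anyone consuming this report programmatically: the --output=json lines captured in the stdout block above are CloudEvents-style JSON objects, one per line. The following is a minimal, illustrative Go sketch (not part of the minikube test suite; the struct fields simply mirror the keys visible above, and the sample line is abbreviated) showing how such a line could be decoded:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// cloudEvent mirrors the keys visible in the captured --output=json lines above.
	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Abbreviated example line; the real output uses UUIDs for "id".
		line := `{"specversion":"1.0","id":"example","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
	}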

TestKicCustomNetwork/create_custom_network (29.07s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-503991 --network=
E0929 09:15:47.597035  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-503991 --network=: (26.970466915s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-503991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-503991
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-503991: (2.082470176s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.07s)

TestKicCustomNetwork/use_default_bridge_network (25.28s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-787735 --network=bridge
E0929 09:16:15.701955  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-787735 --network=bridge: (23.328322577s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-787735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-787735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-787735: (1.933408308s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.28s)

TestKicExistingNetwork (25.1s)

=== RUN   TestKicExistingNetwork
I0929 09:16:37.621871  386225 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 09:16:37.639570  386225 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 09:16:37.639640  386225 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 09:16:37.639657  386225 cli_runner.go:164] Run: docker network inspect existing-network
W0929 09:16:37.656263  386225 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 09:16:37.656298  386225 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0929 09:16:37.656316  386225 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0929 09:16:37.656490  386225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 09:16:37.673144  386225 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-eecaf97bfbe2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:97:95:7a:cc:dd} reservation:<nil>}
I0929 09:16:37.673586  386225 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001880e50}
I0929 09:16:37.673628  386225 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 09:16:37.673681  386225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 09:16:37.730561  386225 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-583143 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-583143 --network=existing-network: (23.013349176s)
helpers_test.go:175: Cleaning up "existing-network-583143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-583143
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-583143: (1.944744427s)
I0929 09:17:02.706918  386225 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.10s)
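The trace above records the selection step behind this test: 192.168.49.0/24 is reported as taken, 192.168.58.0/24 is picked as the next free private subnet, and only then is `docker network create --subnet=192.168.58.0/24 ...` run. Below is a rough, illustrative Go sketch of that walk; it is not minikube's network.go, and the starting point and step size are assumptions inferred solely from the subnets visible in this log:

	package main

	import "fmt"

	// freeSubnet walks candidate 192.168.x.0/24 blocks and returns the first
	// one not present in taken. The start (.49) and step (9) are inferred from
	// the subnets appearing in the log above, not from minikube itself.
	func freeSubnet(taken map[string]bool) (string, bool) {
		for third := 49; third <= 247; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr, true
			}
		}
		return "", false
	}

	func main() {
		// Stand-in for what `docker network inspect` reported above.
		taken := map[string]bool{"192.168.49.0/24": true}
		if cidr, ok := freeSubnet(taken); ok {
			fmt.Println("next free subnet:", cidr) // prints 192.168.58.0/24, matching the log
		}
	}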

TestKicCustomSubnet (25.13s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-929304 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-929304 --subnet=192.168.60.0/24: (23.030265167s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-929304 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-929304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-929304
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-929304: (2.084086492s)
--- PASS: TestKicCustomSubnet (25.13s)

TestKicStaticIP (24.7s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-191728 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-191728 --static-ip=192.168.200.200: (22.483588743s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-191728 ip
helpers_test.go:175: Cleaning up "static-ip-191728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-191728
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-191728: (2.07937132s)
--- PASS: TestKicStaticIP (24.70s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (48.84s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-143594 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-143594 --driver=docker  --container-runtime=crio: (21.278767905s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-155193 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-155193 --driver=docker  --container-runtime=crio: (21.812791609s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-143594
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-155193
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-155193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-155193
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-155193: (2.268576364s)
helpers_test.go:175: Cleaning up "first-143594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-143594
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-143594: (2.311806662s)
--- PASS: TestMinikubeProfile (48.84s)

TestMountStart/serial/StartWithMountFirst (5.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-532592 --memory=3072 --mount-string /tmp/TestMountStartserial1421819675/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-532592 --memory=3072 --mount-string /tmp/TestMountStartserial1421819675/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.401674157s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.40s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-532592 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (5.1s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-547919 --memory=3072 --mount-string /tmp/TestMountStartserial1421819675/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-547919 --memory=3072 --mount-string /tmp/TestMountStartserial1421819675/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.094182993s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.10s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-547919 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-532592 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-532592 --alsologtostderr -v=5: (1.669355013s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-547919 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-547919
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-547919: (1.184980706s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.6s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-547919
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-547919: (6.601239554s)
--- PASS: TestMountStart/serial/RestartStopped (7.60s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-547919 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (96.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-473767 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0929 09:20:19.892308  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-473767 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m35.534339364s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.01s)

TestMultiNode/serial/DeployApp2Nodes (3.58s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-473767 -- rollout status deployment/busybox: (2.109960931s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- exec busybox-7b57f96db7-mpzcj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- exec busybox-7b57f96db7-nvl7q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- exec busybox-7b57f96db7-mpzcj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- exec busybox-7b57f96db7-nvl7q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- exec busybox-7b57f96db7-mpzcj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- exec busybox-7b57f96db7-nvl7q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.58s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- exec busybox-7b57f96db7-mpzcj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- exec busybox-7b57f96db7-mpzcj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- exec busybox-7b57f96db7-nvl7q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-473767 -- exec busybox-7b57f96db7-nvl7q -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (24.34s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-473767 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-473767 -v=5 --alsologtostderr: (23.719165936s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.34s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-473767 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (9.36s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp testdata/cp-test.txt multinode-473767:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp multinode-473767:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4043309069/001/cp-test_multinode-473767.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp multinode-473767:/home/docker/cp-test.txt multinode-473767-m02:/home/docker/cp-test_multinode-473767_multinode-473767-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m02 "sudo cat /home/docker/cp-test_multinode-473767_multinode-473767-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp multinode-473767:/home/docker/cp-test.txt multinode-473767-m03:/home/docker/cp-test_multinode-473767_multinode-473767-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m03 "sudo cat /home/docker/cp-test_multinode-473767_multinode-473767-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp testdata/cp-test.txt multinode-473767-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp multinode-473767-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4043309069/001/cp-test_multinode-473767-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp multinode-473767-m02:/home/docker/cp-test.txt multinode-473767:/home/docker/cp-test_multinode-473767-m02_multinode-473767.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m02 "sudo cat /home/docker/cp-test.txt"
E0929 09:21:15.700521  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767 "sudo cat /home/docker/cp-test_multinode-473767-m02_multinode-473767.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp multinode-473767-m02:/home/docker/cp-test.txt multinode-473767-m03:/home/docker/cp-test_multinode-473767-m02_multinode-473767-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m03 "sudo cat /home/docker/cp-test_multinode-473767-m02_multinode-473767-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp testdata/cp-test.txt multinode-473767-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp multinode-473767-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4043309069/001/cp-test_multinode-473767-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp multinode-473767-m03:/home/docker/cp-test.txt multinode-473767:/home/docker/cp-test_multinode-473767-m03_multinode-473767.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767 "sudo cat /home/docker/cp-test_multinode-473767-m03_multinode-473767.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 cp multinode-473767-m03:/home/docker/cp-test.txt multinode-473767-m02:/home/docker/cp-test_multinode-473767-m03_multinode-473767-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 ssh -n multinode-473767-m02 "sudo cat /home/docker/cp-test_multinode-473767-m03_multinode-473767-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.36s)

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-473767 node stop m03: (1.297511556s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-473767 status: exit status 7 (475.532177ms)

-- stdout --
	multinode-473767
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-473767-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-473767-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-473767 status --alsologtostderr: exit status 7 (472.363533ms)

-- stdout --
	multinode-473767
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-473767-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-473767-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0929 09:21:21.810716  552911 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:21:21.810810  552911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:21:21.810814  552911 out.go:374] Setting ErrFile to fd 2...
	I0929 09:21:21.810818  552911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:21:21.811086  552911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:21:21.811273  552911 out.go:368] Setting JSON to false
	I0929 09:21:21.811312  552911 mustload.go:65] Loading cluster: multinode-473767
	I0929 09:21:21.811421  552911 notify.go:220] Checking for updates...
	I0929 09:21:21.811753  552911 config.go:182] Loaded profile config "multinode-473767": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:21:21.811780  552911 status.go:174] checking status of multinode-473767 ...
	I0929 09:21:21.812364  552911 cli_runner.go:164] Run: docker container inspect multinode-473767 --format={{.State.Status}}
	I0929 09:21:21.832059  552911 status.go:371] multinode-473767 host status = "Running" (err=<nil>)
	I0929 09:21:21.832082  552911 host.go:66] Checking if "multinode-473767" exists ...
	I0929 09:21:21.832308  552911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-473767
	I0929 09:21:21.850200  552911 host.go:66] Checking if "multinode-473767" exists ...
	I0929 09:21:21.850445  552911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:21:21.850504  552911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-473767
	I0929 09:21:21.867672  552911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33274 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/multinode-473767/id_rsa Username:docker}
	I0929 09:21:21.961363  552911 ssh_runner.go:195] Run: systemctl --version
	I0929 09:21:21.965807  552911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:21:21.977094  552911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:21:22.029656  552911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 09:21:22.01949748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:21:22.030238  552911 kubeconfig.go:125] found "multinode-473767" server: "https://192.168.67.2:8443"
	I0929 09:21:22.030274  552911 api_server.go:166] Checking apiserver status ...
	I0929 09:21:22.030311  552911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 09:21:22.041728  552911 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1473/cgroup
	W0929 09:21:22.051366  552911 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1473/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 09:21:22.051420  552911 ssh_runner.go:195] Run: ls
	I0929 09:21:22.054940  552911 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 09:21:22.058927  552911 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 09:21:22.058948  552911 status.go:463] multinode-473767 apiserver status = Running (err=<nil>)
	I0929 09:21:22.058958  552911 status.go:176] multinode-473767 status: &{Name:multinode-473767 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 09:21:22.058973  552911 status.go:174] checking status of multinode-473767-m02 ...
	I0929 09:21:22.059200  552911 cli_runner.go:164] Run: docker container inspect multinode-473767-m02 --format={{.State.Status}}
	I0929 09:21:22.077085  552911 status.go:371] multinode-473767-m02 host status = "Running" (err=<nil>)
	I0929 09:21:22.077105  552911 host.go:66] Checking if "multinode-473767-m02" exists ...
	I0929 09:21:22.077360  552911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-473767-m02
	I0929 09:21:22.095620  552911 host.go:66] Checking if "multinode-473767-m02" exists ...
	I0929 09:21:22.095875  552911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 09:21:22.095918  552911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-473767-m02
	I0929 09:21:22.113440  552911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/21650-382648/.minikube/machines/multinode-473767-m02/id_rsa Username:docker}
	I0929 09:21:22.204846  552911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 09:21:22.216430  552911 status.go:176] multinode-473767-m02 status: &{Name:multinode-473767-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 09:21:22.216471  552911 status.go:174] checking status of multinode-473767-m03 ...
	I0929 09:21:22.216788  552911 cli_runner.go:164] Run: docker container inspect multinode-473767-m03 --format={{.State.Status}}
	I0929 09:21:22.234442  552911 status.go:371] multinode-473767-m03 host status = "Stopped" (err=<nil>)
	I0929 09:21:22.234462  552911 status.go:384] host is not running, skipping remaining checks
	I0929 09:21:22.234467  552911 status.go:176] multinode-473767-m03 status: &{Name:multinode-473767-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
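The --alsologtostderr trace above also documents how `minikube status` decides a node is down: it runs `docker container inspect <name> --format={{.State.Status}}`, and when the container is not running it skips the kubelet/apiserver checks and reports Host/Kubelet as Stopped. A standalone, illustrative sketch of that probe follows (it shells out to the same docker command seen in the trace; the container name is just an example taken from the log, and this is not minikube's internal code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState runs the same command seen in the trace above and returns
	// docker's view of the container state (e.g. "running", "exited").
	func hostState(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := hostState("multinode-473767-m03") // example name from the log
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		if state != "running" {
			fmt.Println("host is not running, skipping remaining checks")
			return
		}
		fmt.Println("host is running")
	}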

TestMultiNode/serial/StartAfterStop (7.4s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-473767 node start m03 -v=5 --alsologtostderr: (6.731426061s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.40s)

TestMultiNode/serial/RestartKeepsNodes (81.78s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-473767
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-473767
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-473767: (29.396300404s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-473767 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-473767 --wait=true -v=5 --alsologtostderr: (52.285920186s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-473767
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.78s)

TestMultiNode/serial/DeleteNode (5.21s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-473767 node delete m03: (4.64051827s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

TestMultiNode/serial/StopMultiNode (28.55s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-473767 stop: (28.37432566s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-473767 status: exit status 7 (87.58953ms)

-- stdout --
	multinode-473767
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-473767-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-473767 status --alsologtostderr: exit status 7 (85.307737ms)

-- stdout --
	multinode-473767
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-473767-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0929 09:23:25.140300  563223 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:23:25.140413  563223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:23:25.140422  563223 out.go:374] Setting ErrFile to fd 2...
	I0929 09:23:25.140426  563223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:23:25.140634  563223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:23:25.140820  563223 out.go:368] Setting JSON to false
	I0929 09:23:25.140874  563223 mustload.go:65] Loading cluster: multinode-473767
	I0929 09:23:25.140975  563223 notify.go:220] Checking for updates...
	I0929 09:23:25.141331  563223 config.go:182] Loaded profile config "multinode-473767": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:23:25.141353  563223 status.go:174] checking status of multinode-473767 ...
	I0929 09:23:25.141875  563223 cli_runner.go:164] Run: docker container inspect multinode-473767 --format={{.State.Status}}
	I0929 09:23:25.160115  563223 status.go:371] multinode-473767 host status = "Stopped" (err=<nil>)
	I0929 09:23:25.160140  563223 status.go:384] host is not running, skipping remaining checks
	I0929 09:23:25.160149  563223 status.go:176] multinode-473767 status: &{Name:multinode-473767 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 09:23:25.160203  563223 status.go:174] checking status of multinode-473767-m02 ...
	I0929 09:23:25.160548  563223 cli_runner.go:164] Run: docker container inspect multinode-473767-m02 --format={{.State.Status}}
	I0929 09:23:25.177760  563223 status.go:371] multinode-473767-m02 host status = "Stopped" (err=<nil>)
	I0929 09:23:25.177779  563223 status.go:384] host is not running, skipping remaining checks
	I0929 09:23:25.177785  563223 status.go:176] multinode-473767-m02 status: &{Name:multinode-473767-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.55s)
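
Note: the status.go:176 lines above print minikube's per-node status struct verbatim. A minimal Go sketch of a comparable shape and an "everything stopped" check follows; the field names are copied from the log, but the struct itself is an assumption, not minikube's actual type.

package main

import "fmt"

// NodeStatus mirrors the fields printed at status.go:176 above.
// Illustrative shape only, not minikube's real status type.
type NodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// allStopped reports whether every node is fully stopped, which is what
// exit status 7 from "minikube status" indicates in the run above.
func allStopped(nodes []NodeStatus) bool {
	for _, n := range nodes {
		if n.Host != "Stopped" || n.Kubelet != "Stopped" {
			return false
		}
	}
	return true
}

func main() {
	nodes := []NodeStatus{
		{Name: "multinode-473767", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
		{Name: "multinode-473767-m02", Host: "Stopped", Kubelet: "Stopped", Worker: true},
	}
	fmt.Println("all stopped:", allStopped(nodes))
}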

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-473767 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-473767 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.572556675s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-473767 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.15s)
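
Note: the go-template query above prints one Ready-condition status per node. A minimal Go sketch of the same check (shelling out to kubectl and asserting every reported status is "True") follows; it is an illustration, not the test's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkNodesReady mirrors the kubectl go-template query used above: it prints
// one Ready-condition status per node and verifies each one is "True".
func checkNodesReady() error {
	tmpl := `'{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl get nodes: %v: %s", err, out)
	}
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.Trim(line, " '")
		if line == "" {
			continue
		}
		if line != "True" {
			return fmt.Errorf("node not ready: %q", line)
		}
	}
	return nil
}

func main() {
	if err := checkNodesReady(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("all nodes Ready")
}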

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-473767
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-473767-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-473767-m02 --driver=docker  --container-runtime=crio: exit status 14 (64.539634ms)

                                                
                                                
-- stdout --
	* [multinode-473767-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21650
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-473767-m02' is duplicated with machine name 'multinode-473767-m02' in profile 'multinode-473767'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-473767-m03 --driver=docker  --container-runtime=crio
E0929 09:24:18.768166  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-473767-m03 --driver=docker  --container-runtime=crio: (22.038180204s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-473767
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-473767: exit status 80 (272.150857ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-473767 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-473767-m03 already exists in multinode-473767-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-473767-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-473767-m03: (2.297846796s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.72s)
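
Note: the exit status 14 above comes from minikube's profile-name uniqueness rule: a new profile may not reuse a name already taken by a machine in an existing profile. A minimal Go sketch of that rule follows, with a hypothetical data shape; minikube's real validation differs.

package main

import "fmt"

// validateProfileName sketches the uniqueness rule enforced above
// (exit status 14, MK_USAGE): a new profile may not reuse a name that is
// already taken by a machine belonging to an existing profile.
func validateProfileName(name string, profiles map[string][]string) error {
	for profile, machines := range profiles {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-473767": {"multinode-473767", "multinode-473767-m02"},
	}
	fmt.Println(validateProfileName("multinode-473767-m02", existing)) // rejected
	fmt.Println(validateProfileName("multinode-473767-m03", existing)) // allowed (nil)
}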

                                                
                                    
TestPreload (112.11s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-772994 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0929 09:25:19.893323  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-772994 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.427614926s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-772994 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-772994 image pull gcr.io/k8s-minikube/busybox: (1.374306504s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-772994
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-772994: (5.795263041s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-772994 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0929 09:26:15.700038  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-772994 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.907863406s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-772994 image list
helpers_test.go:175: Cleaning up "test-preload-772994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-772994
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-772994: (2.375153445s)
--- PASS: TestPreload (112.11s)

                                                
                                    
TestScheduledStopUnix (97.98s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-086752 --memory=3072 --driver=docker  --container-runtime=crio
E0929 09:26:42.958666  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-086752 --memory=3072 --driver=docker  --container-runtime=crio: (21.528642418s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-086752 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-086752 -n scheduled-stop-086752
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-086752 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 09:26:56.182272  386225 retry.go:31] will retry after 78.748µs: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.183483  386225 retry.go:31] will retry after 91.554µs: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.184630  386225 retry.go:31] will retry after 176.485µs: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.185774  386225 retry.go:31] will retry after 491.997µs: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.186878  386225 retry.go:31] will retry after 577.997µs: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.187997  386225 retry.go:31] will retry after 1.117103ms: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.190198  386225 retry.go:31] will retry after 865.766µs: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.191337  386225 retry.go:31] will retry after 2.326201ms: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.194570  386225 retry.go:31] will retry after 1.888033ms: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.196790  386225 retry.go:31] will retry after 2.848576ms: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.199969  386225 retry.go:31] will retry after 4.799217ms: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.205162  386225 retry.go:31] will retry after 8.223432ms: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.214488  386225 retry.go:31] will retry after 17.260091ms: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.232734  386225 retry.go:31] will retry after 25.127543ms: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
I0929 09:26:56.257975  386225 retry.go:31] will retry after 40.753496ms: open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/scheduled-stop-086752/pid: no such file or directory
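
Note: the retry.go:31 lines above show a retry helper waiting for the scheduled-stop pid file with roughly doubling, jittered delays. A minimal Go sketch of that pattern follows (the pid-file path in main is hypothetical); it is not minikube's actual retry helper.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryWithBackoff retries a short operation with roughly doubling,
// jittered delays, similar to the spacing of the retry lines above.
func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		// Double the delay and add some jitter before the next attempt.
		delay = delay*2 + time.Duration(rand.Int63n(int64(delay)))
	}
	return err
}

func main() {
	pidFile := "/tmp/example-profile/pid" // hypothetical path for illustration
	err := retryWithBackoff(10, 100*time.Microsecond, func() error {
		_, statErr := os.Stat(pidFile)
		return statErr
	})
	if err != nil && errors.Is(err, os.ErrNotExist) {
		fmt.Println("gave up waiting for pid file")
	}
}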
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-086752 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-086752 -n scheduled-stop-086752
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-086752
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-086752 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-086752
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-086752: exit status 7 (69.846508ms)

                                                
                                                
-- stdout --
	scheduled-stop-086752
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-086752 -n scheduled-stop-086752
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-086752 -n scheduled-stop-086752: exit status 7 (70.73777ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-086752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-086752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-086752: (5.063853639s)
--- PASS: TestScheduledStopUnix (97.98s)
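
Note: the test above schedules a stop, cancels it with --cancel-scheduled, and later observes "os: process already finished" when signalling again. A minimal Go sketch of a cancel step follows, assuming the scheduled stop is tracked by a pid file under the profile directory as the retry lines suggest; this is illustrative only, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// cancelScheduledStop reads the pid recorded for a pending scheduled stop
// and signals that process so the stop never fires.
func cancelScheduledStop(pidPath string) error {
	data, err := os.ReadFile(pidPath)
	if err != nil {
		return fmt.Errorf("no scheduled stop found: %w", err)
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return fmt.Errorf("bad pid file %s: %w", pidPath, err)
	}
	if err := syscall.Kill(pid, syscall.SIGTERM); err != nil {
		// Signalling a process that already exited fails, which is the
		// situation behind the "signal error" lines above.
		return err
	}
	return os.Remove(pidPath)
}

func main() {
	// Hypothetical path, shaped like the profile pid path in the log.
	fmt.Println(cancelScheduledStop("/tmp/.minikube/profiles/scheduled-stop-086752/pid"))
}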

                                                
                                    
TestInsufficientStorage (9.68s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-706939 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-706939 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.269183706s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1b97fe47-0f17-42de-9fe0-26bf33c3adcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-706939] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3812f2e4-90f8-4b5a-8389-0aea1843ffef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21650"}}
	{"specversion":"1.0","id":"240ecd75-2719-4839-991e-94a38f13b72f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"06e03f38-ea4f-4d53-aefe-41142a8298e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig"}}
	{"specversion":"1.0","id":"23fa0b57-f614-48df-8aa1-5ba83e08325f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube"}}
	{"specversion":"1.0","id":"902dd281-8422-4d92-988c-c7ee8b65533d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"83ae2b08-5064-4cb6-a4da-8353a8056e64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"113198d3-a5ab-4cc1-9260-92151dc886bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"67092526-3589-4742-8225-f4ac0b602bc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5f967201-498a-4135-a4a3-313579b15adc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fa188c4-0981-497c-90e4-6cf02b747c92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8dfc2d4d-f2be-44d4-a0a3-d4efbdf3aaf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-706939\" primary control-plane node in \"insufficient-storage-706939\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b07d0467-5a10-4b34-9aeb-41b2b3e34b90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe5ed719-1548-46ec-bbc3-373bb6926c8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2a8375b7-e178-4b55-a990-505bf5ef9197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
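
Note: with --output=json, minikube start emits one CloudEvents-style JSON object per line, as shown above. A minimal Go sketch of scanning those lines and surfacing the error event follows; the struct covers only the fields used here.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event mirrors the JSON lines emitted by "minikube start --output=json"
// above (envelopes with a string-keyed data map).
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// One error event copied (abbreviated) from the output above.
	logLine := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
	sc := bufio.NewScanner(strings.NewReader(logLine))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}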
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-706939 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-706939 --output=json --layout=cluster: exit status 7 (275.686731ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-706939","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-706939","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 09:28:19.731948  585530 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-706939" does not appear in /home/jenkins/minikube-integration/21650-382648/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-706939 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-706939 --output=json --layout=cluster: exit status 7 (272.139722ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-706939","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-706939","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 09:28:20.004796  585635 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-706939" does not appear in /home/jenkins/minikube-integration/21650-382648/kubeconfig
	E0929 09:28:20.015618  585635 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/insufficient-storage-706939/events.json: no such file or directory

                                                
                                                
** /stderr **
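
Note: minikube status --output=json --layout=cluster returns the cluster document shown above, where StatusCode 507 / "InsufficientStorage" is what the test asserts on. A minimal Go sketch of unmarshalling just the fields inspected here follows; it is not minikube's full schema.

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus covers only the fields of the --layout=cluster payload
// that this sketch inspects.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"insufficient-storage-706939","StatusCode":507,"StatusName":"InsufficientStorage","Nodes":[{"Name":"insufficient-storage-706939","StatusCode":507,"StatusName":"InsufficientStorage"}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	// 507 is the InsufficientStorage code the test checks for.
	fmt.Printf("%s: %d %s (%d nodes)\n", st.Name, st.StatusCode, st.StatusName, len(st.Nodes))
}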
helpers_test.go:175: Cleaning up "insufficient-storage-706939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-706939
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-706939: (1.858283688s)
--- PASS: TestInsufficientStorage (9.68s)

                                                
                                    
TestRunningBinaryUpgrade (46.19s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1386660449 start -p running-upgrade-515807 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1386660449 start -p running-upgrade-515807 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.093976644s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-515807 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-515807 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.21744019s)
helpers_test.go:175: Cleaning up "running-upgrade-515807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-515807
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-515807: (2.453433958s)
--- PASS: TestRunningBinaryUpgrade (46.19s)

                                                
                                    
TestKubernetesUpgrade (302.72s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-044412 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-044412 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.743490166s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-044412
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-044412: (3.10115708s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-044412 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-044412 status --format={{.Host}}: exit status 7 (79.609107ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-044412 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0929 09:31:15.700428  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-044412 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.979698562s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-044412 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-044412 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-044412 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (73.540078ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-044412] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21650
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-044412
	    minikube start -p kubernetes-upgrade-044412 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0444122 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-044412 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
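
Note: the exit status 106 above is minikube refusing to downgrade an existing cluster (K8S_DOWNGRADE_UNSUPPORTED), while upgrades and same-version restarts succeed. A minimal Go sketch of such a rule using golang.org/x/mod/semver follows; it is an illustration, not minikube's actual check.

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange allows upgrades and same-version restarts but rejects
// downgrading an existing cluster, as exercised above.
func checkVersionChange(current, requested string) error {
	if !semver.IsValid(current) || !semver.IsValid(requested) {
		return fmt.Errorf("invalid version: %q -> %q", current, requested)
	}
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkVersionChange("v1.34.1", "v1.28.0")) // rejected
	fmt.Println(checkVersionChange("v1.28.0", "v1.34.1")) // nil: upgrade allowed
}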
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-044412 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-044412 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.907090087s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-044412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-044412
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-044412: (2.771039849s)
--- PASS: TestKubernetesUpgrade (302.72s)

                                                
                                    
TestMissingContainerUpgrade (74.63s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1418309260 start -p missing-upgrade-876603 --memory=3072 --driver=docker  --container-runtime=crio
E0929 09:30:19.892584  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1418309260 start -p missing-upgrade-876603 --memory=3072 --driver=docker  --container-runtime=crio: (21.219751585s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-876603
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-876603: (11.658550291s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-876603
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-876603 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-876603 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.284545301s)
helpers_test.go:175: Cleaning up "missing-upgrade-876603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-876603
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-876603: (3.888416927s)
--- PASS: TestMissingContainerUpgrade (74.63s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (59.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.4081215308 start -p stopped-upgrade-159097 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.4081215308 start -p stopped-upgrade-159097 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.052883393s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.4081215308 -p stopped-upgrade-159097 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.4081215308 -p stopped-upgrade-159097 stop: (4.450887928s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-159097 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-159097 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.481444336s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (59.99s)

                                                
                                    
TestNetworkPlugins/group/false (10.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-646399 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-646399 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (1.502787258s)

                                                
                                                
-- stdout --
	* [false-646399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21650
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 09:28:27.055655  587683 out.go:360] Setting OutFile to fd 1 ...
	I0929 09:28:27.056035  587683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:28:27.056048  587683 out.go:374] Setting ErrFile to fd 2...
	I0929 09:28:27.056055  587683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 09:28:27.056381  587683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21650-382648/.minikube/bin
	I0929 09:28:27.057111  587683 out.go:368] Setting JSON to false
	I0929 09:28:27.058414  587683 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11456,"bootTime":1759126651,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 09:28:27.058512  587683 start.go:140] virtualization: kvm guest
	I0929 09:28:27.190128  587683 out.go:179] * [false-646399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 09:28:27.234434  587683 notify.go:220] Checking for updates...
	I0929 09:28:27.234590  587683 out.go:179]   - MINIKUBE_LOCATION=21650
	I0929 09:28:27.308171  587683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 09:28:27.398874  587683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	I0929 09:28:27.484759  587683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	I0929 09:28:27.652309  587683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 09:28:27.702081  587683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 09:28:27.808878  587683 config.go:182] Loaded profile config "force-systemd-flag-184956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:28:27.809035  587683 config.go:182] Loaded profile config "offline-crio-138244": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I0929 09:28:27.809155  587683 config.go:182] Loaded profile config "stopped-upgrade-159097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0929 09:28:27.809293  587683 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 09:28:27.834448  587683 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 09:28:27.834558  587683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 09:28:27.894394  587683 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:58 SystemTime:2025-09-29 09:28:27.884461473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 09:28:27.894503  587683 docker.go:318] overlay module found
	I0929 09:28:28.007683  587683 out.go:179] * Using the docker driver based on user configuration
	I0929 09:28:28.089325  587683 start.go:304] selected driver: docker
	I0929 09:28:28.089356  587683 start.go:924] validating driver "docker" against <nil>
	I0929 09:28:28.089373  587683 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 09:28:28.165671  587683 out.go:203] 
	W0929 09:28:28.266571  587683 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0929 09:28:28.372054  587683 out.go:203] 

                                                
                                                
** /stderr **
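
Note: the MK_USAGE exit above comes from the rule that the crio container runtime requires a CNI, so --cni=false is rejected. A minimal Go sketch of that validation follows (the docker-only exemption here is an assumption); it is not minikube's actual code.

package main

import "fmt"

// validateCNI rejects --cni=false for runtimes that need a CNI plugin,
// which is what produced the exit status 14 above.
func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false")) // rejected, as in the log above
	fmt.Println(validateCNI("crio", ""))      // nil: default CNI selection is fine
}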
net_test.go:88: 
----------------------- debugLogs start: false-646399 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-646399" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-646399

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-646399"

                                                
                                                
----------------------- debugLogs end: false-646399 [took: 8.366626287s] --------------------------------
helpers_test.go:175: Cleaning up "false-646399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-646399
--- PASS: TestNetworkPlugins/group/false (10.07s)
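Note: every ">>> k8s:" and ">>> host:" probe above ends in the same 'Profile "false-646399" not found' hint because this group stops at validating the "false" CNI value and never creates the profile, so there is nothing for debugLogs to inspect (the kubectl config dump shows clusters: null). A minimal sketch of the follow-up the hint itself suggests, reusing the profile name from this run:

	minikube profile list            # see which profiles actually exist
	minikube start -p false-646399   # only if a real cluster under this name is wanted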

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-159097
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-159097: (1.136918196s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
x
+
TestPause/serial/Start (44.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-377924 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-377924 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.670843443s)
--- PASS: TestPause/serial/Start (44.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-059624 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-059624 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (67.066398ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-059624] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21650
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21650-382648/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21650-382648/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
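The exit status 14 (MK_USAGE) above is the expected result: --no-kubernetes and --kubernetes-version are mutually exclusive, and the stderr spells out the fix. A minimal sketch of that fix, reusing the profile name and driver flags from this run:

	# clear the global kubernetes-version setting, as the error message suggests,
	# then retry without the conflicting flag
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-059624 --no-kubernetes --driver=docker --container-runtime=crio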

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (25.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-059624 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-059624 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.911216232s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-059624 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-059624 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-059624 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.851102016s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-059624 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-059624 status -o json: exit status 2 (298.318544ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-059624","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-059624
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-059624: (1.970472569s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.12s)
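The status JSON above is exactly what the test asserts against: the host container stays Running while the kubelet and API server are Stopped, and the command flags that mixed state with a non-zero exit (2 here). A hedged sketch of the same check, assuming jq is available on the host:

	# print just the fields the test inspects; expect Running / Stopped / Stopped
	out/minikube-linux-amd64 -p NoKubernetes-059624 status -o json | jq -r '.Host, .Kubelet, .APIServer'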

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.23s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-377924 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-377924 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.217678127s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.23s)

                                                
                                    
x
+
TestPause/serial/Pause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-377924 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-377924 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-377924 --output=json --layout=cluster: exit status 2 (311.804244ms)

                                                
                                                
-- stdout --
	{"Name":"pause-377924","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-377924","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
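In the cluster-layout JSON above, minikube reports the paused state with StatusCode 418 ("Paused") and the stopped kubelet with 405, and the command exits 2 to signal that the cluster is not fully running. A hedged sketch for pulling the per-component status names out of that JSON, again assuming jq:

	out/minikube-linux-amd64 status -p pause-377924 --output=json --layout=cluster \
	  | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'
	# for the run above this prints "apiserver: Paused" and "kubelet: Stopped"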

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-377924 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.65s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-377924 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.65s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.63s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-377924 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-377924 --alsologtostderr -v=5: (2.625589063s)
--- PASS: TestPause/serial/DeletePaused (2.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-059624 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-059624 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.2353851s)
--- PASS: TestNoKubernetes/serial/Start (5.24s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (3.73s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.679915665s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-377924
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-377924: exit status 1 (16.010037ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-377924: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.73s)
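The "docker volume inspect" failure above is the success condition: after delete -p pause-377924 no container, volume, or network for the profile should remain. A hedged sketch of the same verification (the name filters and format strings are illustrative additions, not part of the test):

	docker ps -a --filter name=pause-377924 --format '{{.Names}}'        # expect no output
	docker network ls --filter name=pause-377924 --format '{{.Name}}'    # expect no output
	docker volume inspect pause-377924 >/dev/null 2>&1 || echo "volume gone, as expected"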

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-059624 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-059624 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.293932ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
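The "Process exited with status 3" in stderr is what makes this a pass: systemctl is-active exits 0 only when the unit is active, and --quiet suppresses the state string, so a kubelet that was never started surfaces as the non-zero exit seen above. A minimal sketch of the same probe without --quiet, which also prints the unit state:

	out/minikube-linux-amd64 ssh -p NoKubernetes-059624 "sudo systemctl is-active kubelet"; echo "exit=$?"
	# prints the unit state (inactive here) and a non-zero exit code on a --no-kubernetes profile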

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (4.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (3.480306201s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.15s)
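profile list is run twice here, once as a table and once as JSON; the JSON form is the one scripted checks usually consume. A hedged sketch, assuming jq and minikube's usual valid/invalid grouping of that JSON (the raw output is not shown in this log):

	out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'
	# expected to include NoKubernetes-059624 at this point in the run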

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-059624
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-059624: (1.628913844s)
--- PASS: TestNoKubernetes/serial/Stop (1.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-059624 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-059624 --driver=docker  --container-runtime=crio: (6.345272577s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-059624 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-059624 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.227038ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (75.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m15.994335585s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (40.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.159210328s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-n59h6" [b444d9e0-4868-4de7-9f50-c413e277895a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004180409s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-646399 "pgrep -a kubelet"
I0929 09:32:18.300805  386225 config.go:182] Loaded profile config "auto-646399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)
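Every KubeletFlags check in this group reduces to the same idea: pgrep -a prints the kubelet PID together with its full command line, so the test can assert that the flags match the profile's configuration (crio as the container runtime here). A minimal sketch against the auto profile:

	out/minikube-linux-amd64 ssh -p auto-646399 "pgrep -a kubelet"
	# the single output line is the complete kubelet invocation, including its runtime-related flags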

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-646399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zjskz" [49cd421c-c2d1-4e33-92e1-346c76ba38d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zjskz" [49cd421c-c2d1-4e33-92e1-346c76ba38d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.045382348s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-646399 "pgrep -a kubelet"
I0929 09:32:19.105778  386225 config.go:182] Loaded profile config "kindnet-646399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-646399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dvncf" [cc3ea9ce-7b12-4628-8428-46fbfc37a92e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dvncf" [cc3ea9ce-7b12-4628-8428-46fbfc37a92e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003829489s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (53.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (53.305491509s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-646399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.31s)
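The DNS, Localhost, and HairPin checks above all probe the netcat deployment from inside its own pod: nslookup kubernetes.default exercises cluster DNS, the localhost nc call confirms the pod can reach its own listening port over loopback, and the nc call against the "netcat" name goes out through the pod's service and back to the same pod, which is what hairpin traffic means. A hedged sketch of the three probes (the -i 5 pacing flag used by the test is dropped here for brevity):

	kubectl --context auto-646399 exec deployment/netcat -- nslookup kubernetes.default   # cluster DNS
	kubectl --context auto-646399 exec deployment/netcat -- nc -w 5 -z localhost 8080     # loopback to its own port
	kubectl --context auto-646399 exec deployment/netcat -- nc -w 5 -z netcat 8080        # via its own service (hairpin)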

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-646399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (61.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.208605156s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (72.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m12.433593003s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-lbttf" [4db28949-401f-4720-b11f-4a4ce73f9ec1] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-lbttf" [4db28949-401f-4720-b11f-4a4ce73f9ec1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003671223s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-646399 "pgrep -a kubelet"
I0929 09:33:23.373747  386225 config.go:182] Loaded profile config "calico-646399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-646399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6nwqh" [8cfa2804-e176-4903-9d54-53270bdd7bcb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6nwqh" [8cfa2804-e176-4903-9d54-53270bdd7bcb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003430834s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-646399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-646399 "pgrep -a kubelet"
I0929 09:33:50.074575  386225 config.go:182] Loaded profile config "custom-flannel-646399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-646399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5pq25" [e505ee62-d94a-480d-b2b7-529b8b53a19a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5pq25" [e505ee62-d94a-480d-b2b7-529b8b53a19a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003678691s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (53.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.923007703s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-646399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-646399 "pgrep -a kubelet"
I0929 09:34:01.501864  386225 config.go:182] Loaded profile config "enable-default-cni-646399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-646399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fxzfd" [d53d06cd-4799-4501-94dc-b30869fd9451] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fxzfd" [d53d06cd-4799-4501-94dc-b30869fd9451] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003677086s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-646399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (70.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-646399 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m10.064475104s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (51.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-383226 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-383226 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.136411317s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-n2xfp" [574705fe-77ce-47fe-aa27-2b64b6f8b7ef] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003966723s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-646399 "pgrep -a kubelet"
I0929 09:34:53.735088  386225 config.go:182] Loaded profile config "flannel-646399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-646399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rgh6p" [7d0dbc16-8329-403e-aeb8-70e119bed714] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rgh6p" [7d0dbc16-8329-403e-aeb8-70e119bed714] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003908673s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-646399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (7.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-383226 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [997cfa7c-77b0-484c-87d5-6c1f8f604a3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [997cfa7c-77b0-484c-87d5-6c1f8f604a3c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003699943s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-383226 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.29s)
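Besides waiting for the busybox pod to become healthy, the final exec asserts the open-file-descriptor limit visible inside the container, which lets the test flag unexpected rlimit settings coming from the runtime. The same probe, verbatim:

	kubectl --context old-k8s-version-383226 exec busybox -- /bin/sh -c "ulimit -n"
	# prints the soft RLIMIT_NOFILE value applied to the container by the runtime (crio here)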

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (45.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.765281877s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-646399 "pgrep -a kubelet"
I0929 09:35:29.118431  386225 config.go:182] Loaded profile config "bridge-646399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-646399 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4jj7f" [06d9028e-8c36-46e1-bfe7-50ee3343fa2f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4jj7f" [06d9028e-8c36-46e1-bfe7-50ee3343fa2f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00430448s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-383226 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-383226 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-383226 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-383226 --alsologtostderr -v=3: (16.112900428s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-646399 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-646399 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E0929 09:55:13.326159  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/custom-flannel-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:19.893149  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/functional-580781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:23.910959  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:23.917351  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:23.928770  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:23.950100  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:23.991510  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:24.072939  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:24.234475  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:24.556025  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:24.762594  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/enable-default-cni-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:25.198045  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:55:26.479507  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-383226 -n old-k8s-version-383226
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-383226 -n old-k8s-version-383226: exit status 7 (88.790702ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-383226 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
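The EnableAddonAfterStop steps above query host status first and accept exit status 7, which `minikube status` returns for a stopped host, before enabling the dashboard addon. A rough Go sketch of that tolerance, assuming only os/exec and the binary path shown in the log (hostStatus is a hypothetical helper, not the suite's own code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus runs `minikube status --format={{.Host}}` for a profile and
// treats exit code 7 (host stopped) as a usable result rather than a
// failure, matching the "status error: exit status 7 (may be ok)" lines.
func hostStatus(profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		return string(out), nil // e.g. "Stopped"
	}
	return string(out), err
}

func main() {
	status, err := hostStatus("old-k8s-version-383226")
	fmt.Printf("host=%q err=%v\n", status, err)
}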

TestStartStop/group/old-k8s-version/serial/SecondStart (49.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-383226 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-383226 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.828836264s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-383226 -n old-k8s-version-383226
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.15s)

TestStartStop/group/no-preload/serial/FirstStart (57.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.890100967s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.89s)

TestStartStop/group/newest-cni/serial/FirstStart (30.75s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (30.752531147s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.75s)

TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-463478 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [83fcbd06-7775-42c1-9a96-da6f3c258960] Pending
helpers_test.go:352: "busybox" [83fcbd06-7775-42c1-9a96-da6f3c258960] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [83fcbd06-7775-42c1-9a96-da6f3c258960] Running
E0929 09:36:15.700595  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/addons-051783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003634975s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-463478 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.26s)
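The DeployApp steps above create testdata/busybox.yaml and then poll until the pod labelled integration-test=busybox is Running and Ready (here within about 8 seconds of an 8-minute budget). A comparable wait can be expressed with `kubectl wait`; the sketch below is illustrative, reuses the embed-certs-463478 context from the log, and is not the helper the suite actually uses:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForBusybox polls until the busybox pod deployed by the test reports
// Ready, or the timeout expires.
func waitForBusybox(context string, timeout time.Duration) error {
	cmd := exec.Command("kubectl", "--context", context, "wait",
		"--for=condition=Ready", "pod",
		"-l", "integration-test=busybox",
		"-n", "default",
		fmt.Sprintf("--timeout=%s", timeout))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("pod not ready: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := waitForBusybox("embed-certs-463478", 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}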

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-463478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-463478 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/embed-certs/serial/Stop (18.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-463478 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-463478 --alsologtostderr -v=3: (18.245337164s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.25s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-879079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/newest-cni/serial/Stop (2.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-879079 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-879079 --alsologtostderr -v=3: (2.558973342s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.56s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-879079 -n newest-cni-879079
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-879079 -n newest-cni-879079: exit status 7 (72.87947ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-879079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (12.64s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-879079 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (12.200674232s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-879079 -n newest-cni-879079
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.64s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-463478 -n embed-certs-463478
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-463478 -n embed-certs-463478: exit status 7 (69.771516ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-463478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (48.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-463478 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.192294897s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-463478 -n embed-certs-463478
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.56s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.85s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-879079 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
I0929 09:36:48.646586  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
I0929 09:36:48.840144  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
I0929 09:36:49.020043  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.85s)
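The "Not caching binary" lines above use what looks like go-getter-style URL syntax, where the checksum= parameter points at the published kubeadm.sha256 file on dl.k8s.io. A small, self-contained sketch of the same idea (fetch the published SHA-256 and compare it against a locally downloaded copy; verifyChecksum and the /tmp path are illustrative assumptions, not minikube's download code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// verifyChecksum downloads a .sha256 file and compares it against the
// SHA-256 of a local file.
func verifyChecksum(localPath, sha256URL string) (bool, error) {
	resp, err := http.Get(sha256URL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	published, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	fields := strings.Fields(string(published)) // "<hash>" or "<hash>  <name>"
	if len(fields) == 0 {
		return false, fmt.Errorf("empty checksum file")
	}
	want := fields[0]

	f, err := os.Open(localPath)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == want, nil
}

func main() {
	ok, err := verifyChecksum("/tmp/kubeadm",
		"https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256")
	fmt.Println(ok, err)
}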

TestStartStop/group/newest-cni/serial/Pause (2.63s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-879079 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-879079 -n newest-cni-879079
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-879079 -n newest-cni-879079: exit status 2 (302.98245ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-879079 -n newest-cni-879079
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-879079 -n newest-cni-879079: exit status 2 (302.814553ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-879079 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-879079 -n newest-cni-879079
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-879079 -n newest-cni-879079
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.63s)
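The Pause subtest above runs pause, reads {{.APIServer}} and {{.Kubelet}} through status (which exits 2 while the control plane is paused), then unpauses and re-checks. A condensed Go sketch of that sequence, tolerating exit code 2 on the status calls the way the log does (pauseCycle and its error handling are illustrative, not the test's implementation):

package main

import (
	"fmt"
	"os/exec"
)

// pauseCycle pauses a profile, prints apiserver/kubelet status, and unpauses.
func pauseCycle(profile string) error {
	run := func(args ...string) (string, int) {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode()
		} else if err != nil {
			code = -1
		}
		return string(out), code
	}

	if _, code := run("pause", "-p", profile, "--alsologtostderr", "-v=1"); code != 0 {
		return fmt.Errorf("pause failed with exit code %d", code)
	}
	apiserver, _ := run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	kubelet, _ := run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile)
	fmt.Printf("apiserver=%q kubelet=%q\n", apiserver, kubelet) // expect Paused / Stopped
	if _, code := run("unpause", "-p", profile, "--alsologtostderr", "-v=1"); code != 0 {
		return fmt.Errorf("unpause failed with exit code %d", code)
	}
	return nil
}

func main() {
	if err := pauseCycle("newest-cni-879079"); err != nil {
		fmt.Println(err)
	}
}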

TestStartStop/group/no-preload/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-730717 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [45841aab-f04d-4957-bb1e-97651d0b8857] Pending
helpers_test.go:352: "busybox" [45841aab-f04d-4957-bb1e-97651d0b8857] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [45841aab-f04d-4957-bb1e-97651d0b8857] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004624193s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-730717 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.31s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.583819714s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.58s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.73s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-730717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-730717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.645083517s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-730717 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.73s)

TestStartStop/group/no-preload/serial/Stop (16.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-730717 --alsologtostderr -v=3
E0929 09:37:12.828707  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:12.835165  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:12.849945  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:12.871335  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:12.912752  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:12.994945  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:13.156503  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:13.478017  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:14.120078  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:15.401455  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-730717 --alsologtostderr -v=3: (16.302827943s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-730717 -n no-preload-730717
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-730717 -n no-preload-730717: exit status 7 (71.286773ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-730717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (54.21s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E0929 09:37:17.963382  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:18.485555  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:18.491921  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:18.503240  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:18.524623  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:18.566025  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:18.647311  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:18.808812  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:19.130924  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:19.772978  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:21.055239  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:23.085186  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:23.618076  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-730717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.846670928s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-730717 -n no-preload-730717
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d95cf908-71ca-4221-a13c-1b444a1b843b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d95cf908-71ca-4221-a13c-1b444a1b843b] Running
E0929 09:37:38.981921  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003232595s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-547715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-547715 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-547715 --alsologtostderr -v=3
E0929 09:37:53.809046  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/kindnet-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 09:37:59.464246  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/auto-646399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-547715 --alsologtostderr -v=3: (18.138336356s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715: exit status 7 (88.360444ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-547715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-547715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.894293624s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.21s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-383226 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
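The VerifyKubernetesImages checks above list the images loaded in a profile and report anything outside the stock Kubernetes set (here busybox and two kindnetd tags). A rough Go equivalent, assuming the default `image list` output of one image reference per line and an illustrative registry.k8s.io allowlist (listNonKubernetesImages is a hypothetical helper, and the filter is not minikube's own rule set):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// listNonKubernetesImages returns images in the profile that do not come
// from registry.k8s.io.
func listNonKubernetesImages(profile string) ([]string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").Output()
	if err != nil {
		return nil, err
	}
	var extra []string
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		if img == "" {
			continue
		}
		if !strings.HasPrefix(img, "registry.k8s.io/") {
			extra = append(extra, img) // e.g. kindest/kindnetd, gcr.io/k8s-minikube/busybox
		}
	}
	return extra, sc.Err()
}

func main() {
	extra, err := listNonKubernetesImages("old-k8s-version-383226")
	fmt.Println(extra, err)
}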

TestStartStop/group/old-k8s-version/serial/Pause (2.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-383226 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-383226 -n old-k8s-version-383226
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-383226 -n old-k8s-version-383226: exit status 2 (298.963068ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-383226 -n old-k8s-version-383226
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-383226 -n old-k8s-version-383226: exit status 2 (305.720712ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-383226 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-383226 -n old-k8s-version-383226
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-383226 -n old-k8s-version-383226
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.60s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.64s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-463478 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I0929 09:55:31.195432  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
I0929 09:55:31.334539  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
I0929 09:55:31.473448  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.64s)

TestStartStop/group/embed-certs/serial/Pause (2.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-463478 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-463478 -n embed-certs-463478
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-463478 -n embed-certs-463478: exit status 2 (299.275501ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-463478 -n embed-certs-463478
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-463478 -n embed-certs-463478: exit status 2 (294.860134ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-463478 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-463478 -n embed-certs-463478
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-463478 -n embed-certs-463478
E0929 09:55:34.163360  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/old-k8s-version-383226/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.62s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.64s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-730717 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I0929 09:56:16.357292  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
I0929 09:56:16.506966  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
I0929 09:56:16.640397  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.64s)

TestStartStop/group/no-preload/serial/Pause (2.58s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-730717 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-730717 -n no-preload-730717
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-730717 -n no-preload-730717: exit status 2 (290.415653ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-730717 -n no-preload-730717
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-730717 -n no-preload-730717: exit status 2 (295.430458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-730717 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-730717 -n no-preload-730717
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-730717 -n no-preload-730717
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.58s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-547715 image list --format=json
E0929 09:56:59.735277  386225 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21650-382648/.minikube/profiles/no-preload-730717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
I0929 09:56:59.931216  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
I0929 09:57:00.071535  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
I0929 09:57:00.280283  386225 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.71s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-547715 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715: exit status 2 (291.748042ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715: exit status 2 (292.451871ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-547715 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-547715 -n default-k8s-diff-port-547715
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.60s)

Test skip (27/331)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-051783 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
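
The three TunnelCmd DNS skips above are platform gates rather than runtime gates: DNS forwarding is only exercised with the hyperkit driver on macOS. A minimal Go sketch of such a check, assuming a hypothetical driverName variable standing in for the suite's driver selection; it is not the real functional_test_tunnel_test.go implementation:

	// platform_gate_sketch_test.go -- illustrative only, not part of the minikube test suite.
	package guards_test

	import (
		"runtime"
		"testing"
	)

	// driverName is a hypothetical stand-in for the suite's driver flag.
	var driverName = "docker"

	// requireHyperkitOnDarwin skips unless the hyperkit driver is used on macOS,
	// mirroring the DNS-forwarding skip messages above.
	func requireHyperkitOnDarwin(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || driverName != "hyperkit" {
			t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
		}
	}

	func TestDNSForwardingSketch(t *testing.T) {
		requireHyperkitOnDarwin(t)
	}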

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.07s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-646399 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-646399" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-646399

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-646399"

                                                
                                                
----------------------- debugLogs end: kubenet-646399 [took: 4.413443938s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-646399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-646399
--- SKIP: TestNetworkPlugins/group/kubenet (5.07s)
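
Every kubectl probe in the debug log above fails the same way because the kubenet-646399 profile was never created: the kubeconfig dumped under ">>> k8s: kubectl config" has no clusters or contexts at all. A small standalone Go sketch, assuming k8s.io/client-go is available (it is not part of the test harness), that reproduces the same missing-context condition by loading the default kubeconfig:

	// context_check_sketch.go -- standalone sketch, assumes k8s.io/client-go is on the module path.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the same way kubectl does (~/.kube/config or $KUBECONFIG).
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Println("error loading kubeconfig:", err)
			return
		}
		const name = "kubenet-646399"
		if _, ok := cfg.Contexts[name]; !ok {
			// Same condition kubectl reports above as
			// "context was not found for specified context: kubenet-646399".
			fmt.Printf("context %q not found; known contexts: %d\n", name, len(cfg.Contexts))
		}
	}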

                                                
                                    
TestNetworkPlugins/group/cilium (4.91s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-646399 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-646399" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-646399

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-646399" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-646399"

                                                
                                                
----------------------- debugLogs end: cilium-646399 [took: 4.751589752s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-646399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-646399
--- SKIP: TestNetworkPlugins/group/cilium (4.91s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-940348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-940348
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    