Test Report: Docker_Linux_crio_arm64 21656

8fdbaae537091671bd14dcf95cc23073d72e85b2:2025-09-29:41680

Test fail (11/325)

TestAddons/parallel/Ingress (152.24s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-571100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-571100 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-571100 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4510b2b1-0036-461a-8366-4a4ae266d5b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [4510b2b1-0036-461a-8366-4a4ae266d5b8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009735512s
I0929 11:25:19.306797  294425 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-571100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.620538864s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-571100 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
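The failed step at addons_test.go:264 is out/minikube-linux-arm64 ssh wrapping a curl with a Host header against the ingress controller on 127.0.0.1 inside the node, so it can be repeated by hand while the cluster is still up. A minimal Go sketch of such a retry loop, assuming the binary path and the addons-571100 profile name from this log (the 2-minute deadline here is illustrative, not the test's own timeout):

	// retry the ingress probe that timed out in this run (ssh exit status 28 = curl timeout)
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// same command as logged at addons_test.go:264
			cmd := exec.Command("out/minikube-linux-arm64", "-p", "addons-571100",
				"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
			out, err := cmd.CombinedOutput()
			if err == nil {
				fmt.Printf("ingress responded:\n%s\n", out)
				return
			}
			fmt.Printf("attempt failed (%v), retrying...\n", err)
			time.Sleep(10 * time.Second)
		}
		fmt.Println("no response from ingress before deadline")
	}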
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-571100
helpers_test.go:243: (dbg) docker inspect addons-571100:

-- stdout --
	[
	    {
	        "Id": "1bbe4232e53007db479dfe37e29983fa94b72c8386427477956abb9d7fca4814",
	        "Created": "2025-09-29T11:20:54.523824929Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295581,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T11:20:54.601164815Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/1bbe4232e53007db479dfe37e29983fa94b72c8386427477956abb9d7fca4814/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1bbe4232e53007db479dfe37e29983fa94b72c8386427477956abb9d7fca4814/hostname",
	        "HostsPath": "/var/lib/docker/containers/1bbe4232e53007db479dfe37e29983fa94b72c8386427477956abb9d7fca4814/hosts",
	        "LogPath": "/var/lib/docker/containers/1bbe4232e53007db479dfe37e29983fa94b72c8386427477956abb9d7fca4814/1bbe4232e53007db479dfe37e29983fa94b72c8386427477956abb9d7fca4814-json.log",
	        "Name": "/addons-571100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-571100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-571100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1bbe4232e53007db479dfe37e29983fa94b72c8386427477956abb9d7fca4814",
	                "LowerDir": "/var/lib/docker/overlay2/d39c67c5d242e0a40af4a503150e5c40f59e7955505d47569d42321d19f80f9d-init/diff:/var/lib/docker/overlay2/83e06d49de89e61a1046432dce270924281d24e14aa4bd929fb6d16b3962f5cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d39c67c5d242e0a40af4a503150e5c40f59e7955505d47569d42321d19f80f9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d39c67c5d242e0a40af4a503150e5c40f59e7955505d47569d42321d19f80f9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d39c67c5d242e0a40af4a503150e5c40f59e7955505d47569d42321d19f80f9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-571100",
	                "Source": "/var/lib/docker/volumes/addons-571100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-571100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-571100",
	                "name.minikube.sigs.k8s.io": "addons-571100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d0028e32a8ccea0cf0811d8a049260dcab07630b15a4bae222bdb2421dbb935",
	            "SandboxKey": "/var/run/docker/netns/5d0028e32a8c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-571100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:04:2d:69:7c:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b613c0038baacef00b5c0777f749ea6ef3f8257cbd316201e8969dafad66c67c",
	                    "EndpointID": "91eb5aac7babae09d671934f46527202d345a66a6cc01b01393108237a5774b6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-571100",
	                        "1bbe4232e530"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
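The inspect output shows every node port published only on 127.0.0.1 with a dynamically assigned host port (22/tcp maps to 33138 here). A small Go sketch of reading that mapping back out of docker inspect, assuming the container name addons-571100 from this report and the JSON field names shown above:

	// print the host address/port bound to the node's SSH port
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "addons-571100").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		if len(containers) == 0 {
			panic("no such container")
		}
		for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:33138
		}
	}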
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-571100 -n addons-571100
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-571100 logs -n 25: (1.670822097s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-274224                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-274224 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ start   │ --download-only -p binary-mirror-752167 --alsologtostderr --binary-mirror http://127.0.0.1:35747 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-752167   │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ delete  │ -p binary-mirror-752167                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-752167   │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ enable dashboard -p addons-571100                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-571100                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ start   │ -p addons-571100 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:23 UTC │
	│ addons  │ addons-571100 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:23 UTC │ 29 Sep 25 11:23 UTC │
	│ addons  │ addons-571100 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:23 UTC │ 29 Sep 25 11:23 UTC │
	│ addons  │ addons-571100 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:23 UTC │ 29 Sep 25 11:23 UTC │
	│ ip      │ addons-571100 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ addons  │ addons-571100 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ addons  │ addons-571100 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ addons  │ addons-571100 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ addons  │ enable headlamp -p addons-571100 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ ssh     │ addons-571100 ssh cat /opt/local-path-provisioner/pvc-a0cc8561-b2cc-4ce3-a55e-46c1fefc7753_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ addons  │ addons-571100 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ addons  │ addons-571100 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:24 UTC │ 29 Sep 25 11:24 UTC │
	│ addons  │ addons-571100 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ addons-571100 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ addons-571100 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ addons-571100 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ ssh     │ addons-571100 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-571100                                                                                                                                                                                                                                                                                                                                                                                           │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ addons-571100 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ ip      │ addons-571100 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-571100          │ jenkins │ v1.37.0 │ 29 Sep 25 11:27 UTC │ 29 Sep 25 11:27 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:20:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:20:28.985312  295191 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:20:28.985487  295191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:28.985496  295191 out.go:374] Setting ErrFile to fd 2...
	I0929 11:20:28.985502  295191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:28.985776  295191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 11:20:28.986231  295191 out.go:368] Setting JSON to false
	I0929 11:20:28.987035  295191 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3780,"bootTime":1759141049,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0929 11:20:28.987109  295191 start.go:140] virtualization:  
	I0929 11:20:28.990280  295191 out.go:179] * [addons-571100] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 11:20:28.993942  295191 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:20:28.994046  295191 notify.go:220] Checking for updates...
	I0929 11:20:28.999670  295191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:20:29.002419  295191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:20:29.005332  295191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	I0929 11:20:29.007999  295191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 11:20:29.010799  295191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:20:29.013789  295191 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:20:29.034835  295191 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 11:20:29.034948  295191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:29.108147  295191 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-29 11:20:29.098323399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:20:29.108270  295191 docker.go:318] overlay module found
	I0929 11:20:29.111445  295191 out.go:179] * Using the docker driver based on user configuration
	I0929 11:20:29.114197  295191 start.go:304] selected driver: docker
	I0929 11:20:29.114218  295191 start.go:924] validating driver "docker" against <nil>
	I0929 11:20:29.114233  295191 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:20:29.114932  295191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:29.171560  295191 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-29 11:20:29.16264656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:20:29.171747  295191 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:20:29.171976  295191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:20:29.174957  295191 out.go:179] * Using Docker driver with root privileges
	I0929 11:20:29.177822  295191 cni.go:84] Creating CNI manager for ""
	I0929 11:20:29.177904  295191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:20:29.177913  295191 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 11:20:29.177993  295191 start.go:348] cluster config:
	{Name:addons-571100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-571100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0929 11:20:29.183183  295191 out.go:179] * Starting "addons-571100" primary control-plane node in "addons-571100" cluster
	I0929 11:20:29.186076  295191 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 11:20:29.189025  295191 out.go:179] * Pulling base image v0.0.48 ...
	I0929 11:20:29.191916  295191 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:20:29.191981  295191 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 11:20:29.192011  295191 cache.go:58] Caching tarball of preloaded images
	I0929 11:20:29.192009  295191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 11:20:29.192097  295191 preload.go:172] Found /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0929 11:20:29.192107  295191 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:20:29.192462  295191 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/config.json ...
	I0929 11:20:29.192492  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/config.json: {Name:mk257efb88a10525c85657171cdbacaa09774267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:20:29.209268  295191 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 11:20:29.209404  295191 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 11:20:29.209428  295191 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 11:20:29.209433  295191 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 11:20:29.209441  295191 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 11:20:29.209451  295191 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 11:20:47.349163  295191 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 11:20:47.349205  295191 cache.go:232] Successfully downloaded all kic artifacts
	I0929 11:20:47.349235  295191 start.go:360] acquireMachinesLock for addons-571100: {Name:mk1ea2a6d0fa79a59dc4a645095596946c9b15a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:20:47.349357  295191 start.go:364] duration metric: took 98.714µs to acquireMachinesLock for "addons-571100"
	I0929 11:20:47.349387  295191 start.go:93] Provisioning new machine with config: &{Name:addons-571100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-571100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:20:47.349471  295191 start.go:125] createHost starting for "" (driver="docker")
	I0929 11:20:47.352915  295191 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 11:20:47.353148  295191 start.go:159] libmachine.API.Create for "addons-571100" (driver="docker")
	I0929 11:20:47.353187  295191 client.go:168] LocalClient.Create starting
	I0929 11:20:47.353305  295191 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem
	I0929 11:20:47.706560  295191 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem
	I0929 11:20:47.896215  295191 cli_runner.go:164] Run: docker network inspect addons-571100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 11:20:47.913195  295191 cli_runner.go:211] docker network inspect addons-571100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 11:20:47.913283  295191 network_create.go:284] running [docker network inspect addons-571100] to gather additional debugging logs...
	I0929 11:20:47.913303  295191 cli_runner.go:164] Run: docker network inspect addons-571100
	W0929 11:20:47.929363  295191 cli_runner.go:211] docker network inspect addons-571100 returned with exit code 1
	I0929 11:20:47.929395  295191 network_create.go:287] error running [docker network inspect addons-571100]: docker network inspect addons-571100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-571100 not found
	I0929 11:20:47.929409  295191 network_create.go:289] output of [docker network inspect addons-571100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-571100 not found
	
	** /stderr **
	I0929 11:20:47.929502  295191 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 11:20:47.948883  295191 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1dbe0}
	I0929 11:20:47.948934  295191 network_create.go:124] attempt to create docker network addons-571100 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 11:20:47.948997  295191 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-571100 addons-571100
	I0929 11:20:48.016756  295191 network_create.go:108] docker network addons-571100 192.168.49.0/24 created
	I0929 11:20:48.016789  295191 kic.go:121] calculated static IP "192.168.49.2" for the "addons-571100" container
	I0929 11:20:48.016877  295191 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 11:20:48.031801  295191 cli_runner.go:164] Run: docker volume create addons-571100 --label name.minikube.sigs.k8s.io=addons-571100 --label created_by.minikube.sigs.k8s.io=true
	I0929 11:20:48.051257  295191 oci.go:103] Successfully created a docker volume addons-571100
	I0929 11:20:48.051371  295191 cli_runner.go:164] Run: docker run --rm --name addons-571100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-571100 --entrypoint /usr/bin/test -v addons-571100:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 11:20:50.029426  295191 cli_runner.go:217] Completed: docker run --rm --name addons-571100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-571100 --entrypoint /usr/bin/test -v addons-571100:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (1.978007819s)
	I0929 11:20:50.029460  295191 oci.go:107] Successfully prepared a docker volume addons-571100
	I0929 11:20:50.029483  295191 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:20:50.029502  295191 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 11:20:50.029578  295191 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-571100:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 11:20:54.453889  295191 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-571100:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.424274114s)
	I0929 11:20:54.453940  295191 kic.go:203] duration metric: took 4.424434391s to extract preloaded images to volume ...
	W0929 11:20:54.454071  295191 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 11:20:54.454189  295191 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 11:20:54.510144  295191 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-571100 --name addons-571100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-571100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-571100 --network addons-571100 --ip 192.168.49.2 --volume addons-571100:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 11:20:54.825950  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Running}}
	I0929 11:20:54.849578  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:20:54.874108  295191 cli_runner.go:164] Run: docker exec addons-571100 stat /var/lib/dpkg/alternatives/iptables
	I0929 11:20:54.923430  295191 oci.go:144] the created container "addons-571100" has a running status.
	I0929 11:20:54.923463  295191 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa...
	I0929 11:20:55.681699  295191 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 11:20:55.708159  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:20:55.729440  295191 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 11:20:55.729459  295191 kic_runner.go:114] Args: [docker exec --privileged addons-571100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 11:20:55.772967  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:20:55.798166  295191 machine.go:93] provisionDockerMachine start ...
	I0929 11:20:55.798266  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:20:55.816397  295191 main.go:141] libmachine: Using SSH client type: native
	I0929 11:20:55.816720  295191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0929 11:20:55.816729  295191 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 11:20:55.955926  295191 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-571100
	
	I0929 11:20:55.955954  295191 ubuntu.go:182] provisioning hostname "addons-571100"
	I0929 11:20:55.956028  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:20:55.974048  295191 main.go:141] libmachine: Using SSH client type: native
	I0929 11:20:55.974413  295191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0929 11:20:55.974429  295191 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-571100 && echo "addons-571100" | sudo tee /etc/hostname
	I0929 11:20:56.128911  295191 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-571100
	
	I0929 11:20:56.129035  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:20:56.147454  295191 main.go:141] libmachine: Using SSH client type: native
	I0929 11:20:56.147760  295191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0929 11:20:56.147783  295191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-571100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-571100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-571100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:20:56.288447  295191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
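The two SSH commands above set the container hostname and patch /etc/hosts inside the node. A quick check, sketched with the profile's ssh subcommand:

    # Confirm the hostname and the 127.0.1.1 entry written above
    out/minikube-linux-arm64 -p addons-571100 ssh "hostname && grep addons-571100 /etc/hosts"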
	I0929 11:20:56.288476  295191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21656-292570/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-292570/.minikube}
	I0929 11:20:56.288502  295191 ubuntu.go:190] setting up certificates
	I0929 11:20:56.288519  295191 provision.go:84] configureAuth start
	I0929 11:20:56.288595  295191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-571100
	I0929 11:20:56.305248  295191 provision.go:143] copyHostCerts
	I0929 11:20:56.305340  295191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/key.pem (1675 bytes)
	I0929 11:20:56.305484  295191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/ca.pem (1078 bytes)
	I0929 11:20:56.305550  295191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/cert.pem (1123 bytes)
	I0929 11:20:56.305607  295191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem org=jenkins.addons-571100 san=[127.0.0.1 192.168.49.2 addons-571100 localhost minikube]
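The server certificate generated above is signed for the SANs listed in the log (127.0.0.1, 192.168.49.2, addons-571100, localhost, minikube). One way to confirm them on the emitted file, as a sketch using the path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'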
	I0929 11:20:56.499837  295191 provision.go:177] copyRemoteCerts
	I0929 11:20:56.499902  295191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:20:56.499947  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:20:56.516773  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:20:56.617564  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 11:20:56.642180  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:20:56.666718  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 11:20:56.689722  295191 provision.go:87] duration metric: took 401.173083ms to configureAuth
	I0929 11:20:56.689800  295191 ubuntu.go:206] setting minikube options for container-runtime
	I0929 11:20:56.690005  295191 config.go:182] Loaded profile config "addons-571100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:20:56.690116  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:20:56.708116  295191 main.go:141] libmachine: Using SSH client type: native
	I0929 11:20:56.708593  295191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0929 11:20:56.708619  295191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:20:56.961609  295191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 11:20:56.961636  295191 machine.go:96] duration metric: took 1.163445467s to provisionDockerMachine
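The SSH step just completed wrote /etc/sysconfig/crio.minikube with the --insecure-registry option and restarted CRI-O. A sketch of how to double-check that inside the node:

    out/minikube-linux-arm64 -p addons-571100 ssh "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"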
	I0929 11:20:56.961646  295191 client.go:171] duration metric: took 9.608447067s to LocalClient.Create
	I0929 11:20:56.961658  295191 start.go:167] duration metric: took 9.608511641s to libmachine.API.Create "addons-571100"
	I0929 11:20:56.961665  295191 start.go:293] postStartSetup for "addons-571100" (driver="docker")
	I0929 11:20:56.961675  295191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:20:56.961739  295191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:20:56.961792  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:20:56.978956  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:20:57.077358  295191 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:20:57.080523  295191 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 11:20:57.080561  295191 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 11:20:57.080578  295191 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 11:20:57.080584  295191 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 11:20:57.080594  295191 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-292570/.minikube/addons for local assets ...
	I0929 11:20:57.080670  295191 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-292570/.minikube/files for local assets ...
	I0929 11:20:57.080697  295191 start.go:296] duration metric: took 119.026419ms for postStartSetup
	I0929 11:20:57.081043  295191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-571100
	I0929 11:20:57.098058  295191 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/config.json ...
	I0929 11:20:57.098350  295191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:20:57.098407  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:20:57.115670  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:20:57.213102  295191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 11:20:57.217338  295191 start.go:128] duration metric: took 9.867847124s to createHost
	I0929 11:20:57.217360  295191 start.go:83] releasing machines lock for "addons-571100", held for 9.867989629s
	I0929 11:20:57.217435  295191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-571100
	I0929 11:20:57.234065  295191 ssh_runner.go:195] Run: cat /version.json
	I0929 11:20:57.234139  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:20:57.234384  295191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:20:57.234447  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:20:57.253218  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:20:57.264846  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:20:57.477067  295191 ssh_runner.go:195] Run: systemctl --version
	I0929 11:20:57.481142  295191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:20:57.623006  295191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 11:20:57.627791  295191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:20:57.648736  295191 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 11:20:57.648862  295191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:20:57.682788  295191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
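The find/mv commands above park the stock loopback and bridge CNI configs under a .mk_disabled suffix so that only the CNI installed later is active. To see what was moved, a sketch run inside the node:

    # Expect 87-podman-bridge.conflist and 100-crio-bridge.conf renamed with .mk_disabled
    out/minikube-linux-arm64 -p addons-571100 ssh "ls -l /etc/cni/net.d/"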
	I0929 11:20:57.682854  295191 start.go:495] detecting cgroup driver to use...
	I0929 11:20:57.682907  295191 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 11:20:57.682976  295191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:20:57.699301  295191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:20:57.711418  295191 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:20:57.711484  295191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:20:57.724932  295191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:20:57.738627  295191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:20:57.817183  295191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:20:57.906619  295191 docker.go:234] disabling docker service ...
	I0929 11:20:57.906684  295191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:20:57.927267  295191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:20:57.939232  295191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:20:58.028909  295191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:20:58.123174  295191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:20:58.136030  295191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:20:58.153126  295191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 11:20:58.153249  295191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:20:58.163108  295191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:20:58.163221  295191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:20:58.173242  295191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:20:58.184018  295191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:20:58.194250  295191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:20:58.203448  295191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:20:58.213134  295191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:20:58.228401  295191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:20:58.238100  295191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:20:58.246483  295191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:20:58.254827  295191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:20:58.333552  295191 ssh_runner.go:195] Run: sudo systemctl restart crio
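The sed edits above set the pause image, the cgroup manager and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before this restart. Verifying the result could look like this sketch:

    out/minikube-linux-arm64 -p addons-571100 ssh \
      "grep -E 'pause_image|cgroup_manager|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # Expected values, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   "net.ipv4.ip_unprivileged_port_start=0",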
	I0929 11:20:58.450830  295191 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:20:58.450975  295191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:20:58.454677  295191 start.go:563] Will wait 60s for crictl version
	I0929 11:20:58.454778  295191 ssh_runner.go:195] Run: which crictl
	I0929 11:20:58.458042  295191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:20:58.506455  295191 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 11:20:58.506614  295191 ssh_runner.go:195] Run: crio --version
	I0929 11:20:58.544305  295191 ssh_runner.go:195] Run: crio --version
	I0929 11:20:58.591631  295191 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 11:20:58.594643  295191 cli_runner.go:164] Run: docker network inspect addons-571100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 11:20:58.610127  295191 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 11:20:58.613537  295191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:20:58.624056  295191 kubeadm.go:875] updating cluster {Name:addons-571100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-571100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:20:58.624180  295191 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:20:58.624270  295191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:20:58.700684  295191 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:20:58.700709  295191 crio.go:433] Images already preloaded, skipping extraction
	I0929 11:20:58.700767  295191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:20:58.735517  295191 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:20:58.735540  295191 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:20:58.735548  295191 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0929 11:20:58.735633  295191 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-571100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-571100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:20:58.735716  295191 ssh_runner.go:195] Run: crio config
	I0929 11:20:58.784236  295191 cni.go:84] Creating CNI manager for ""
	I0929 11:20:58.784255  295191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:20:58.784266  295191 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:20:58.784300  295191 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-571100 NodeName:addons-571100 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:20:58.784429  295191 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-571100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:20:58.784501  295191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:20:58.793077  295191 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:20:58.793148  295191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:20:58.801723  295191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 11:20:58.819261  295191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:20:58.836825  295191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
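The kubeadm config printed above is what lands in /var/tmp/minikube/kubeadm.yaml.new (2210 bytes). It can be exercised without touching node state via kubeadm's dry-run mode, a sketch assuming it is run as root inside the node:

    sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run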
	I0929 11:20:58.854609  295191 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 11:20:58.857975  295191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:20:58.868271  295191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:20:58.952523  295191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:20:58.966006  295191 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100 for IP: 192.168.49.2
	I0929 11:20:58.966030  295191 certs.go:194] generating shared ca certs ...
	I0929 11:20:58.966058  295191 certs.go:226] acquiring lock for ca certs: {Name:mkd338253a13587776ce07e6238e0355c4b0e958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:20:58.966801  295191 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-292570/.minikube/ca.key
	I0929 11:20:59.941649  295191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt ...
	I0929 11:20:59.941682  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt: {Name:mkff20305a16702adc6b3f4f3dde4ca252d6df9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:20:59.942481  295191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-292570/.minikube/ca.key ...
	I0929 11:20:59.942500  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/ca.key: {Name:mk48c83f748a97fb6dea8e73047f6e378293c088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:20:59.942604  295191 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.key
	I0929 11:21:00.336569  295191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.crt ...
	I0929 11:21:00.336608  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.crt: {Name:mkd70bb6c495fa2e3441e670f920e9db72728887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:21:00.337088  295191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.key ...
	I0929 11:21:00.338730  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.key: {Name:mk7b6be517afdb0fc4bd36b96469b8f7417cd08e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:21:00.338988  295191 certs.go:256] generating profile certs ...
	I0929 11:21:00.346373  295191 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.key
	I0929 11:21:00.346405  295191 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt with IP's: []
	I0929 11:21:00.884170  295191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt ...
	I0929 11:21:00.884206  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: {Name:mkd678c5576a5320ebe12f30160dc37c4b0775eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:21:00.885070  295191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.key ...
	I0929 11:21:00.885087  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.key: {Name:mk19a32f75c3a397d35759233e742a18df89e5c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:21:00.885755  295191 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.key.446fe5f1
	I0929 11:21:00.885782  295191 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.crt.446fe5f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 11:21:01.195481  295191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.crt.446fe5f1 ...
	I0929 11:21:01.195516  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.crt.446fe5f1: {Name:mk56345b03f98b2479ab3aac6d1c7964f062eef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:21:01.196277  295191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.key.446fe5f1 ...
	I0929 11:21:01.196314  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.key.446fe5f1: {Name:mk3f519cb0f404af04286fa31ae222992c892e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:21:01.196419  295191 certs.go:381] copying /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.crt.446fe5f1 -> /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.crt
	I0929 11:21:01.196509  295191 certs.go:385] copying /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.key.446fe5f1 -> /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.key
	I0929 11:21:01.196565  295191 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/proxy-client.key
	I0929 11:21:01.196586  295191 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/proxy-client.crt with IP's: []
	I0929 11:21:01.492917  295191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/proxy-client.crt ...
	I0929 11:21:01.492947  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/proxy-client.crt: {Name:mkd8e87938dc074ea6262158039f2d073fc2663b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:21:01.493128  295191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/proxy-client.key ...
	I0929 11:21:01.493142  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/proxy-client.key: {Name:mkd813528cf6d4539af46b8deeb05a50f75170ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:21:01.493933  295191 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 11:21:01.493975  295191 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem (1078 bytes)
	I0929 11:21:01.494004  295191 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:21:01.494032  295191 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem (1675 bytes)
	I0929 11:21:01.494633  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:21:01.519176  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 11:21:01.543686  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:21:01.570417  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:21:01.597089  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:21:01.623687  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:21:01.650588  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:21:01.678876  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 11:21:01.707450  295191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:21:01.734365  295191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:21:01.757743  295191 ssh_runner.go:195] Run: openssl version
	I0929 11:21:01.764123  295191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:21:01.773478  295191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:21:01.777166  295191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:21:01.777236  295191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:21:01.785201  295191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
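The openssl/ln pair above follows the OpenSSL trust-directory convention: the symlink name is the CA certificate's subject hash plus a ".0" suffix. A short check inside the node, as a sketch:

    # The hash printed should match the b5213941 used in the symlink name
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0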
	I0929 11:21:01.797619  295191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:21:01.801760  295191 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 11:21:01.801838  295191 kubeadm.go:392] StartCluster: {Name:addons-571100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-571100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:21:01.801934  295191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 11:21:01.802006  295191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:21:01.846855  295191 cri.go:89] found id: ""
	I0929 11:21:01.847018  295191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:21:01.856300  295191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:21:01.865451  295191 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 11:21:01.865520  295191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:21:01.877415  295191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:21:01.877435  295191 kubeadm.go:157] found existing configuration files:
	
	I0929 11:21:01.877522  295191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:21:01.889708  295191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:21:01.889811  295191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:21:01.899069  295191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:21:01.908194  295191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:21:01.908300  295191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:21:01.919056  295191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:21:01.931397  295191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:21:01.931485  295191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:21:01.941234  295191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:21:01.949871  295191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:21:01.949937  295191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 11:21:01.959743  295191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 11:21:02.008140  295191 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 11:21:02.008567  295191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 11:21:02.032915  295191 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 11:21:02.032989  295191 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0929 11:21:02.033028  295191 kubeadm.go:310] OS: Linux
	I0929 11:21:02.033078  295191 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 11:21:02.033129  295191 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0929 11:21:02.033180  295191 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 11:21:02.033232  295191 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 11:21:02.033282  295191 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 11:21:02.033334  295191 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 11:21:02.033385  295191 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 11:21:02.033437  295191 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 11:21:02.033486  295191 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0929 11:21:02.109158  295191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 11:21:02.109272  295191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 11:21:02.109367  295191 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 11:21:02.116859  295191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 11:21:02.121354  295191 out.go:252]   - Generating certificates and keys ...
	I0929 11:21:02.121452  295191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 11:21:02.121524  295191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 11:21:02.669851  295191 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 11:21:02.985660  295191 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 11:21:03.231677  295191 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 11:21:04.323326  295191 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 11:21:05.239782  295191 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 11:21:05.239917  295191 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-571100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 11:21:06.326446  295191 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 11:21:06.326621  295191 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-571100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 11:21:06.909891  295191 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 11:21:07.639885  295191 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 11:21:08.539521  295191 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 11:21:08.539833  295191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 11:21:08.965078  295191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 11:21:10.198268  295191 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 11:21:10.629191  295191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 11:21:11.301070  295191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 11:21:11.397905  295191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 11:21:11.398642  295191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 11:21:11.403285  295191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 11:21:11.406666  295191 out.go:252]   - Booting up control plane ...
	I0929 11:21:11.406768  295191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 11:21:11.406862  295191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 11:21:11.407257  295191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 11:21:11.417635  295191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 11:21:11.418052  295191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 11:21:11.425412  295191 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 11:21:11.425729  295191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 11:21:11.425783  295191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 11:21:11.513322  295191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 11:21:11.513452  295191 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 11:21:13.016688  295191 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502236496s
	I0929 11:21:13.020702  295191 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 11:21:13.020800  295191 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 11:21:13.020894  295191 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 11:21:13.020976  295191 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 11:21:16.909294  295191 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.888924593s
	I0929 11:21:17.173339  295191 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.153522062s
	I0929 11:21:18.022249  295191 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.002219143s
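kubeadm's control-plane-check above polls the three health endpoints named in the log. The same probes can be run by hand from inside the node, as a sketch (-k because the serving certs are signed by the cluster CA):

    curl -sk 'https://192.168.49.2:8443/livez?verbose'   # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz             # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez               # kube-scheduler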
	I0929 11:21:18.044633  295191 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 11:21:18.065711  295191 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 11:21:18.081908  295191 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 11:21:18.082127  295191 kubeadm.go:310] [mark-control-plane] Marking the node addons-571100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 11:21:18.095278  295191 kubeadm.go:310] [bootstrap-token] Using token: 3ksj8n.g9ikhjla5t23vt0o
	I0929 11:21:18.099674  295191 out.go:252]   - Configuring RBAC rules ...
	I0929 11:21:18.099814  295191 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 11:21:18.106210  295191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 11:21:18.116985  295191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 11:21:18.122669  295191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 11:21:18.126477  295191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 11:21:18.131180  295191 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 11:21:18.430203  295191 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 11:21:18.868669  295191 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 11:21:19.430655  295191 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 11:21:19.430678  295191 kubeadm.go:310] 
	I0929 11:21:19.430743  295191 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 11:21:19.430753  295191 kubeadm.go:310] 
	I0929 11:21:19.430834  295191 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 11:21:19.430843  295191 kubeadm.go:310] 
	I0929 11:21:19.430870  295191 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 11:21:19.430935  295191 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 11:21:19.430992  295191 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 11:21:19.431000  295191 kubeadm.go:310] 
	I0929 11:21:19.431056  295191 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 11:21:19.431064  295191 kubeadm.go:310] 
	I0929 11:21:19.431130  295191 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 11:21:19.431223  295191 kubeadm.go:310] 
	I0929 11:21:19.431284  295191 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 11:21:19.431370  295191 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 11:21:19.431447  295191 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 11:21:19.431455  295191 kubeadm.go:310] 
	I0929 11:21:19.431543  295191 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 11:21:19.431633  295191 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 11:21:19.431641  295191 kubeadm.go:310] 
	I0929 11:21:19.431729  295191 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3ksj8n.g9ikhjla5t23vt0o \
	I0929 11:21:19.431839  295191 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0f05eddd1015f8286cd14da3bbb5f4fa1c9488aa1ea754c6d0a74a9af6ec8883 \
	I0929 11:21:19.431864  295191 kubeadm.go:310] 	--control-plane 
	I0929 11:21:19.431872  295191 kubeadm.go:310] 
	I0929 11:21:19.431961  295191 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 11:21:19.431970  295191 kubeadm.go:310] 
	I0929 11:21:19.432056  295191 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3ksj8n.g9ikhjla5t23vt0o \
	I0929 11:21:19.432165  295191 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0f05eddd1015f8286cd14da3bbb5f4fa1c9488aa1ea754c6d0a74a9af6ec8883 
	I0929 11:21:19.436043  295191 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0929 11:21:19.436283  295191 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0929 11:21:19.436417  295191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
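The SystemVerification warning above concerns cgroups v1 maintenance mode. Which cgroup version the node actually runs on can be checked with this sketch (inside the node):

    # cgroup2fs means cgroups v2; tmpfs means cgroups v1, matching the warning above
    stat -fc %T /sys/fs/cgroup/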
	I0929 11:21:19.436438  295191 cni.go:84] Creating CNI manager for ""
	I0929 11:21:19.436458  295191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:21:19.441512  295191 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 11:21:19.444515  295191 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 11:21:19.448153  295191 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 11:21:19.448177  295191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 11:21:19.466562  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 11:21:19.740624  295191 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:21:19.740768  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:21:19.740856  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-571100 minikube.k8s.io/updated_at=2025_09_29T11_21_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8 minikube.k8s.io/name=addons-571100 minikube.k8s.io/primary=true
	I0929 11:21:19.748072  295191 ops.go:34] apiserver oom_adj: -16
	I0929 11:21:19.899566  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:21:20.399686  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:21:20.899747  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:21:21.399689  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:21:21.899624  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:21:22.400546  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:21:22.900270  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:21:23.400194  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:21:23.900516  295191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:21:24.012284  295191 kubeadm.go:1105] duration metric: took 4.271565341s to wait for elevateKubeSystemPrivileges
	I0929 11:21:24.012330  295191 kubeadm.go:394] duration metric: took 22.210496445s to StartCluster
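The kubectl invocations above created the minikube-rbac clusterrolebinding and applied the minikube.k8s.io/* labels to the node. A quick sanity check, sketched against the addons-571100 context from this run:

    kubectl --context addons-571100 get clusterrolebinding minikube-rbac -o wide
    kubectl --context addons-571100 get node addons-571100 --show-labels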
	I0929 11:21:24.012347  295191 settings.go:142] acquiring lock: {Name:mk8da0e06d1edc552f3cec9ed26678491ca734d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:21:24.012460  295191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:21:24.012900  295191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/kubeconfig: {Name:mk84aa46812be3352ca2874bd06be6025c5058bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:21:24.013098  295191 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:21:24.013232  295191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 11:21:24.013468  295191 config.go:182] Loaded profile config "addons-571100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:21:24.013504  295191 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 11:21:24.013577  295191 addons.go:69] Setting yakd=true in profile "addons-571100"
	I0929 11:21:24.013604  295191 addons.go:238] Setting addon yakd=true in "addons-571100"
	I0929 11:21:24.013629  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.014130  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.014598  295191 addons.go:69] Setting metrics-server=true in profile "addons-571100"
	I0929 11:21:24.014622  295191 addons.go:238] Setting addon metrics-server=true in "addons-571100"
	I0929 11:21:24.014646  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.015062  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.018101  295191 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-571100"
	I0929 11:21:24.018172  295191 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-571100"
	I0929 11:21:24.018218  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.018685  295191 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-571100"
	I0929 11:21:24.018863  295191 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-571100"
	I0929 11:21:24.018894  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.019268  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.019401  295191 out.go:179] * Verifying Kubernetes components...
	I0929 11:21:24.018728  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.018734  295191 addons.go:69] Setting registry=true in profile "addons-571100"
	I0929 11:21:24.023134  295191 addons.go:238] Setting addon registry=true in "addons-571100"
	I0929 11:21:24.023179  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.023626  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.026451  295191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:21:24.018740  295191 addons.go:69] Setting registry-creds=true in profile "addons-571100"
	I0929 11:21:24.032355  295191 addons.go:238] Setting addon registry-creds=true in "addons-571100"
	I0929 11:21:24.032427  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.032963  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.018748  295191 addons.go:69] Setting storage-provisioner=true in profile "addons-571100"
	I0929 11:21:24.044831  295191 addons.go:238] Setting addon storage-provisioner=true in "addons-571100"
	I0929 11:21:24.044888  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.018752  295191 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-571100"
	I0929 11:21:24.018755  295191 addons.go:69] Setting volcano=true in profile "addons-571100"
	I0929 11:21:24.018762  295191 addons.go:69] Setting volumesnapshots=true in profile "addons-571100"
	I0929 11:21:24.018803  295191 addons.go:69] Setting cloud-spanner=true in profile "addons-571100"
	I0929 11:21:24.018808  295191 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-571100"
	I0929 11:21:24.018812  295191 addons.go:69] Setting default-storageclass=true in profile "addons-571100"
	I0929 11:21:24.018815  295191 addons.go:69] Setting gcp-auth=true in profile "addons-571100"
	I0929 11:21:24.018818  295191 addons.go:69] Setting ingress=true in profile "addons-571100"
	I0929 11:21:24.018821  295191 addons.go:69] Setting ingress-dns=true in profile "addons-571100"
	I0929 11:21:24.018824  295191 addons.go:69] Setting inspektor-gadget=true in profile "addons-571100"
	I0929 11:21:24.054657  295191 addons.go:238] Setting addon inspektor-gadget=true in "addons-571100"
	I0929 11:21:24.054708  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.055192  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.055384  295191 addons.go:238] Setting addon cloud-spanner=true in "addons-571100"
	I0929 11:21:24.055411  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.055803  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.075537  295191 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-571100"
	I0929 11:21:24.075594  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.076110  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.084071  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.091540  295191 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-571100"
	I0929 11:21:24.091904  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.114124  295191 mustload.go:65] Loading cluster: addons-571100
	I0929 11:21:24.114350  295191 config.go:182] Loaded profile config "addons-571100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:21:24.114629  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.131805  295191 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-571100"
	I0929 11:21:24.132308  295191 addons.go:238] Setting addon volcano=true in "addons-571100"
	I0929 11:21:24.132359  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.132933  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.142209  295191 addons.go:238] Setting addon ingress=true in "addons-571100"
	I0929 11:21:24.142276  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.142807  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.155168  295191 addons.go:238] Setting addon volumesnapshots=true in "addons-571100"
	I0929 11:21:24.155228  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.155691  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.168713  295191 addons.go:238] Setting addon ingress-dns=true in "addons-571100"
	I0929 11:21:24.168772  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.169366  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.249324  295191 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 11:21:24.252213  295191 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 11:21:24.252253  295191 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 11:21:24.252619  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.272672  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.291046  295191 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:21:24.293938  295191 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 11:21:24.294084  295191 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 11:21:24.297743  295191 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 11:21:24.297776  295191 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 11:21:24.297847  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.298106  295191 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:21:24.298162  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 11:21:24.298246  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.294021  295191 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:21:24.334198  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:21:24.334319  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.348604  295191 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 11:21:24.354510  295191 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 11:21:24.354536  295191 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 11:21:24.354622  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.377466  295191 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 11:21:24.380501  295191 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:21:24.380527  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 11:21:24.380596  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.394449  295191 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 11:21:24.396576  295191 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 11:21:24.400787  295191 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:21:24.400811  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 11:21:24.400878  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.406451  295191 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 11:21:24.412474  295191 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 11:21:24.414112  295191 addons.go:238] Setting addon default-storageclass=true in "addons-571100"
	I0929 11:21:24.414144  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.414818  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.420268  295191 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W0929 11:21:24.415512  295191 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
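	The volcano warning above is expected on this runtime: per the message, the addon refuses to enable when the container runtime is crio. If the warning is unwanted, the addon can be turned off explicitly for this profile, for example:

	    # Disable the volcano addon for this profile; other addons are unaffected.
	    minikube addons disable volcano -p addons-571100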
	I0929 11:21:24.420590  295191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:21:24.420725  295191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 11:21:24.420801  295191 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 11:21:24.424849  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 11:21:24.424922  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.420838  295191 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 11:21:24.429677  295191 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-571100"
	I0929 11:21:24.429720  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.430282  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:24.447589  295191 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:21:24.453391  295191 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 11:21:24.456243  295191 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 11:21:24.456266  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 11:21:24.456343  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.456520  295191 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 11:21:24.461275  295191 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:21:24.462698  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:24.467491  295191 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 11:21:24.467731  295191 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:21:24.467764  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 11:21:24.467856  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.472222  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.474188  295191 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:21:24.474205  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 11:21:24.474260  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.474953  295191 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 11:21:24.475368  295191 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 11:21:24.475380  295191 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 11:21:24.475437  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.489180  295191 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 11:21:24.493230  295191 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 11:21:24.507558  295191 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 11:21:24.514649  295191 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 11:21:24.520541  295191 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 11:21:24.521066  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.524353  295191 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 11:21:24.524375  295191 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 11:21:24.524448  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.556318  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.560564  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.572018  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.622841  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.695035  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.709965  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.710722  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.710875  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.715929  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.719088  295191 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:21:24.719105  295191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:21:24.719168  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.719387  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.719979  295191 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W0929 11:21:24.720794  295191 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 11:21:24.720852  295191 retry.go:31] will retry after 136.613994ms: ssh: handshake failed: EOF
	I0929 11:21:24.720933  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.726502  295191 out.go:179]   - Using image docker.io/busybox:stable
	I0929 11:21:24.729631  295191 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:21:24.729656  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 11:21:24.729724  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:24.753746  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:24.774061  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	W0929 11:21:24.858816  295191 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 11:21:24.858892  295191 retry.go:31] will retry after 455.032151ms: ssh: handshake failed: EOF
	I0929 11:21:24.929482  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:21:25.047605  295191 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 11:21:25.047635  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 11:21:25.070661  295191 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:25.070687  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 11:21:25.110404  295191 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 11:21:25.110443  295191 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 11:21:25.130639  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:21:25.162394  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:21:25.165713  295191 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 11:21:25.165780  295191 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 11:21:25.180723  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:21:25.204449  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:21:25.238787  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 11:21:25.242450  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:25.245965  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:21:25.253393  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:21:25.265026  295191 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 11:21:25.265052  295191 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 11:21:25.276683  295191 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 11:21:25.276709  295191 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 11:21:25.292587  295191 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:21:25.292613  295191 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 11:21:25.317353  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:21:25.346722  295191 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 11:21:25.346747  295191 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 11:21:25.416798  295191 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 11:21:25.416823  295191 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 11:21:25.476494  295191 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:21:25.476522  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 11:21:25.483609  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:21:25.583702  295191 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 11:21:25.583730  295191 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 11:21:25.686971  295191 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 11:21:25.686998  295191 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 11:21:25.696789  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:21:25.775939  295191 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:21:25.775965  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 11:21:25.856330  295191 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 11:21:25.856359  295191 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 11:21:25.903701  295191 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 11:21:25.903727  295191 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 11:21:25.969902  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:21:25.996253  295191 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 11:21:25.996301  295191 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 11:21:26.024858  295191 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:21:26.024883  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 11:21:26.080874  295191 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 11:21:26.080901  295191 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 11:21:26.109713  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:21:26.146315  295191 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 11:21:26.146343  295191 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 11:21:26.237663  295191 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 11:21:26.237689  295191 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 11:21:26.259489  295191 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 11:21:26.259514  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 11:21:26.310416  295191 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 11:21:26.310441  295191 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 11:21:26.452041  295191 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 11:21:26.452064  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 11:21:26.514628  295191 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 11:21:26.514652  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 11:21:26.671910  295191 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:21:26.671937  295191 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 11:21:26.916633  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:21:27.697778  295191 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.273113585s)
	I0929 11:21:27.697808  295191 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
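	The "host record injected" line refers to the sed pipeline completed just above: minikube edits the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1 here). Restated more readably, the same edit is roughly (the original also inserts a "log" directive, omitted here):

	    # Insert a hosts{} stanza ahead of the "forward . /etc/resolv.conf" line in the
	    # Corefile held by the coredns ConfigMap, then replace the ConfigMap in place.
	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	      | kubectl -n kube-system replace -f -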
	I0929 11:21:27.698810  295191 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.277917107s)
	I0929 11:21:27.699423  295191 node_ready.go:35] waiting up to 6m0s for node "addons-571100" to be "Ready" ...
	I0929 11:21:28.454407  295191 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-571100" context rescaled to 1 replicas
	I0929 11:21:29.027039  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.097521288s)
	I0929 11:21:29.027109  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.896449462s)
	I0929 11:21:29.027153  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.864676339s)
	W0929 11:21:29.715576  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:30.005119  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.824314291s)
	I0929 11:21:30.005153  295191 addons.go:479] Verifying addon ingress=true in "addons-571100"
	I0929 11:21:30.005288  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.800758277s)
	I0929 11:21:30.005322  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.766471255s)
	I0929 11:21:30.005422  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.762950856s)
	W0929 11:21:30.005441  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:30.005454  295191 retry.go:31] will retry after 343.970351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
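	The ig-crd.yaml rejection above comes from client-side validation: at least one YAML document in that file is missing the apiVersion and kind fields every Kubernetes object must carry, so the apply keeps failing on each retry. For reference, a CustomResourceDefinition only validates once those header fields are present; a minimal, purely illustrative skeleton (hypothetical group and names, not the inspektor-gadget CRD) looks like:

	    # Hypothetical skeleton only -- it shows the apiVersion/kind header the
	    # validator reports as missing from ig-crd.yaml.
	    kubectl apply -f - <<'EOF'
	    apiVersion: apiextensions.k8s.io/v1
	    kind: CustomResourceDefinition
	    metadata:
	      name: widgets.example.com        # hypothetical plural.group
	    spec:
	      group: example.com
	      scope: Namespaced
	      names:
	        plural: widgets
	        singular: widget
	        kind: Widget
	      versions:
	        - name: v1
	          served: true
	          storage: true
	          schema:
	            openAPIV3Schema:
	              type: object
	    EOF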
	I0929 11:21:30.005481  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.759496171s)
	I0929 11:21:30.005674  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.752253842s)
	I0929 11:21:30.005737  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.688361756s)
	I0929 11:21:30.005852  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.522211346s)
	I0929 11:21:30.005876  295191 addons.go:479] Verifying addon metrics-server=true in "addons-571100"
	I0929 11:21:30.005927  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.309117652s)
	I0929 11:21:30.005938  295191 addons.go:479] Verifying addon registry=true in "addons-571100"
	I0929 11:21:30.005965  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.036035484s)
	I0929 11:21:30.008624  295191 out.go:179] * Verifying ingress addon...
	I0929 11:21:30.010585  295191 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-571100 service yakd-dashboard -n yakd-dashboard
	
	I0929 11:21:30.010627  295191 out.go:179] * Verifying registry addon...
	I0929 11:21:30.013156  295191 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 11:21:30.015022  295191 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 11:21:30.022563  295191 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 11:21:30.022584  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:30.023218  295191 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 11:21:30.023236  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:21:30.031486  295191 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
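	The default-storageclass error above is an ordinary optimistic-concurrency conflict: another controller updated the local-path StorageClass between minikube's read and write, so the write with the stale resourceVersion was rejected. Re-reading and retrying, or using a patch (which carries no resourceVersion), clears it; a hedged manual equivalent using the standard default-class annotation would be:

	    # Mark local-path as non-default and the minikube-provisioned class as default.
	    # "standard" is minikube's usual default StorageClass name; adjust if it differs.
	    kubectl patch storageclass local-path -p \
	      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	    kubectl patch storageclass standard -p \
	      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'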
	I0929 11:21:30.078525  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.968750292s)
	W0929 11:21:30.078590  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 11:21:30.078612  295191 retry.go:31] will retry after 317.439797ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
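	The VolumeSnapshotClass failure above is an ordering race: the snapshot CRDs are created in the same kubectl apply, but the csi-hostpath-snapclass object is submitted before the new VolumeSnapshotClass kind is discoverable, hence "ensure CRDs are installed first". minikube's own fix is simply to retry, as the retry notice above states; a sketch that instead avoids the race by splitting the apply and waiting for CRD establishment (paths as in the log):

	    # Apply the snapshot CRDs first and wait until they are established,
	    # then apply the objects that depend on them.
	    kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    kubectl wait --for=condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	      crd/volumesnapshots.snapshot.storage.k8s.io
	    kubectl apply \
	      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml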
	I0929 11:21:30.285043  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.368359886s)
	I0929 11:21:30.285086  295191 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-571100"
	I0929 11:21:30.290249  295191 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 11:21:30.293941  295191 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 11:21:30.300791  295191 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 11:21:30.300820  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:30.349782  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:30.396237  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:21:30.526700  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:30.526856  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:30.808877  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:31.020369  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:31.020404  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:31.297232  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:31.471806  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.121982659s)
	W0929 11:21:31.471852  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:31.471873  295191 retry.go:31] will retry after 226.527756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:31.471966  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.075694306s)
	I0929 11:21:31.517534  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:31.517679  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:31.698629  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:31.804909  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:32.018073  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:32.018186  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:21:32.203534  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:32.300968  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:32.524970  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:32.525178  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:21:32.526394  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:32.526423  295191 retry.go:31] will retry after 329.667516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:32.799823  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:32.857213  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:33.018704  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:33.019545  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:33.298304  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:33.520335  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:33.521019  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:21:33.684221  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:33.684255  295191 retry.go:31] will retry after 807.646073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:33.803197  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:34.016827  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:34.018647  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:34.297459  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:34.492277  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:34.516741  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:34.518573  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:21:34.702857  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:34.737204  295191 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 11:21:34.737284  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:34.754780  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:34.806634  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:34.873210  295191 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 11:21:34.893781  295191 addons.go:238] Setting addon gcp-auth=true in "addons-571100"
	I0929 11:21:34.893831  295191 host.go:66] Checking if "addons-571100" exists ...
	I0929 11:21:34.894261  295191 cli_runner.go:164] Run: docker container inspect addons-571100 --format={{.State.Status}}
	I0929 11:21:34.920342  295191 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 11:21:34.920406  295191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-571100
	I0929 11:21:34.947227  295191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/addons-571100/id_rsa Username:docker}
	I0929 11:21:35.016192  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:35.018187  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:35.299977  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:21:35.338464  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:35.338493  295191 retry.go:31] will retry after 767.636732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:35.342165  295191 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:21:35.345055  295191 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 11:21:35.347894  295191 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 11:21:35.347915  295191 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 11:21:35.366239  295191 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 11:21:35.366266  295191 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 11:21:35.383925  295191 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:21:35.383952  295191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 11:21:35.402861  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:21:35.517233  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:35.519648  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:35.801308  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:35.883759  295191 addons.go:479] Verifying addon gcp-auth=true in "addons-571100"
	I0929 11:21:35.887349  295191 out.go:179] * Verifying gcp-auth addon...
	I0929 11:21:35.891128  295191 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 11:21:35.904753  295191 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 11:21:35.904820  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:36.017925  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:36.018065  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:36.106350  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:36.296888  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:36.396073  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:36.517771  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:36.518138  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:21:36.703372  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:36.803127  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:36.894541  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:21:36.950100  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:36.950135  295191 retry.go:31] will retry after 1.324476467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:37.015935  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:37.018065  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:37.297602  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:37.394582  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:37.516537  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:37.518693  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:37.800670  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:37.894763  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:38.019786  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:38.019945  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:38.275220  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:38.297421  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:38.395332  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:38.516829  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:38.520648  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:38.799800  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:38.895580  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:39.018986  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:39.019339  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:21:39.128146  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:39.128178  295191 retry.go:31] will retry after 3.241731873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 11:21:39.203054  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:39.297473  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:39.394955  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:39.516934  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:39.517724  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:39.801154  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:39.894878  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:40.017080  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:40.017472  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:40.297447  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:40.394388  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:40.516644  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:40.518550  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:40.797198  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:40.894923  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:41.017039  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:41.017275  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:41.297074  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:41.394841  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:41.517729  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:41.518564  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:21:41.702516  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:41.798952  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:41.894524  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:42.016585  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:42.018518  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:42.298296  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:42.370805  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:42.404265  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:42.519839  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:42.522607  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:42.803672  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:42.895427  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:43.017404  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:43.019682  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:21:43.228648  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:43.228729  295191 retry.go:31] will retry after 5.894872447s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:43.297359  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:43.395239  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:43.517432  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:43.517916  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:21:43.702641  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:43.800154  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:43.894075  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:44.016472  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:44.018152  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:44.297442  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:44.394292  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:44.516374  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:44.518201  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:44.802885  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:44.894844  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:45.017265  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:45.017798  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:45.303103  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:45.403833  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:45.517425  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:45.518581  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:21:45.702850  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:45.800692  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:45.894556  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:46.016795  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:46.018510  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:46.297322  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:46.394686  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:46.518079  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:46.518234  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:46.798134  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:46.894016  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:47.015930  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:47.017649  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:47.296673  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:47.394665  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:47.516430  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:47.518162  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:21:47.703154  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:47.801628  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:47.894221  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:48.016668  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:48.018429  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:48.296974  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:48.394756  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:48.516599  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:48.518218  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:48.797033  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:48.894712  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:49.017151  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:49.017562  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:49.123860  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:49.297074  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:49.395281  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:49.517144  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:49.519159  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:21:49.703471  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:49.806838  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:49.895023  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:21:49.905849  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:49.905882  295191 retry.go:31] will retry after 4.939655881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:50.017634  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:50.017885  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:50.297340  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:50.394603  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:50.516575  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:50.518269  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:50.801675  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:50.894557  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:51.016176  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:51.017995  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:51.296901  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:51.395586  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:51.516426  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:51.518084  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:51.801232  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:51.894094  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:52.017031  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:52.017912  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:21:52.202749  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:52.297537  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:52.394746  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:52.516974  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:52.522274  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:52.797430  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:52.894279  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:53.019468  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:53.019766  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:53.296866  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:53.394767  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:53.517187  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:53.518454  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:53.797669  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:53.894465  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:54.016986  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:54.018967  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:54.297661  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:54.394184  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:54.517409  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:54.521126  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:21:54.703141  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:54.803115  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:54.846243  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:21:54.894328  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:55.017364  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:55.018481  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:55.296679  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:55.395614  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:55.518239  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:55.518447  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:21:55.661694  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:55.661726  295191 retry.go:31] will retry after 9.022312273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:21:55.797839  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:55.894685  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:56.016987  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:56.019252  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:56.296628  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:56.394282  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:56.516243  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:56.518137  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:56.801146  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:56.894069  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:57.017127  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:57.018302  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:21:57.202060  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:57.296934  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:57.395069  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:57.516786  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:57.517910  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:57.801882  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:57.894615  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:58.016985  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:58.018750  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:58.297410  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:58.394498  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:58.516470  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:58.518453  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:58.797159  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:58.894272  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:59.016536  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:59.018266  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:21:59.202329  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:21:59.297118  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:59.394930  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:21:59.517064  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:21:59.517536  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:21:59.801145  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:21:59.894735  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:00.017246  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:00.017350  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:00.297863  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:00.397775  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:00.517130  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:00.519553  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:00.802055  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:00.897986  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:01.017539  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:01.017566  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:22:01.203448  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:22:01.297366  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:01.394827  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:01.517845  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:01.518211  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:01.803047  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:01.893966  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:02.017348  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:02.017486  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:02.297600  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:02.394828  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:02.516735  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:02.517808  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:02.797848  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:02.894739  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:03.017061  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:03.017791  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:03.298230  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:03.394025  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:03.516684  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:03.518054  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0929 11:22:03.703304  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:22:03.801734  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:03.894711  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:04.017673  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:04.017847  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:04.297430  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:04.394052  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:04.517269  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:04.518316  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:04.684591  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:22:04.799815  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:04.894815  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:05.018298  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:05.018609  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:05.298638  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:05.395628  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:22:05.488634  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:22:05.488668  295191 retry.go:31] will retry after 8.599088927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:22:05.516905  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:05.518642  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:05.797811  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:05.894586  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:06.017568  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:06.017791  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:22:06.202985  295191 node_ready.go:57] node "addons-571100" has "Ready":"False" status (will retry)
	I0929 11:22:06.297016  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:06.395542  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:06.517833  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:06.525058  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:06.728213  295191 node_ready.go:49] node "addons-571100" is "Ready"
	I0929 11:22:06.728314  295191 node_ready.go:38] duration metric: took 39.028863881s for node "addons-571100" to be "Ready" ...
	I0929 11:22:06.728357  295191 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:22:06.728434  295191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:22:06.750137  295191 api_server.go:72] duration metric: took 42.737001499s to wait for apiserver process to appear ...
	I0929 11:22:06.750204  295191 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:22:06.750239  295191 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 11:22:06.762138  295191 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
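With the node Ready after roughly 39 seconds, the tooling probes the apiserver's /healthz endpoint at https://192.168.49.2:8443 and gets a 200 with body "ok". The same probe can be run by hand from the host; the command below is illustrative and assumes the default RBAC that exposes the health endpoints to unauthenticated clients, plus this run's IP and port:

# Reproduce the healthz probe from the log (illustrative). -k skips TLS
# verification because the apiserver's certificate is not trusted by the host.
curl -k https://192.168.49.2:8443/healthz
# Expected response from a healthy apiserver: ok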
	I0929 11:22:06.770743  295191 api_server.go:141] control plane version: v1.34.0
	I0929 11:22:06.770814  295191 api_server.go:131] duration metric: took 20.588977ms to wait for apiserver health ...
	I0929 11:22:06.770838  295191 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:22:06.812402  295191 system_pods.go:59] 19 kube-system pods found
	I0929 11:22:06.812490  295191 system_pods.go:61] "coredns-66bc5c9577-8jjxv" [c4109e68-4bdb-4409-b221-58c1cacbc20c] Pending
	I0929 11:22:06.812511  295191 system_pods.go:61] "csi-hostpath-attacher-0" [4b09dc09-5e44-4cfd-8dc4-079eeb0fbf91] Pending
	I0929 11:22:06.812533  295191 system_pods.go:61] "csi-hostpath-resizer-0" [55031bba-b46b-47c6-8856-4e53f4c4da8a] Pending
	I0929 11:22:06.812570  295191 system_pods.go:61] "csi-hostpathplugin-6xdd6" [465b0bf5-2d19-419e-97c1-c64657fd0841] Pending
	I0929 11:22:06.812592  295191 system_pods.go:61] "etcd-addons-571100" [de7aac11-d9d7-4257-ae2c-3db739e430d6] Running
	I0929 11:22:06.812615  295191 system_pods.go:61] "kindnet-4wwqc" [1b323b38-59c2-4655-b51d-78bfe7733f32] Running
	I0929 11:22:06.812647  295191 system_pods.go:61] "kube-apiserver-addons-571100" [be282352-a910-40c7-9db7-2609a5295232] Running
	I0929 11:22:06.812673  295191 system_pods.go:61] "kube-controller-manager-addons-571100" [ff3ace80-566d-4c69-8f31-b40f52a638af] Running
	I0929 11:22:06.812695  295191 system_pods.go:61] "kube-ingress-dns-minikube" [f68f4c4d-0640-4eec-b000-0af9f7a49cb2] Pending
	I0929 11:22:06.812719  295191 system_pods.go:61] "kube-proxy-r2dq5" [5a20035a-442c-4e63-8ce5-f71015438a29] Running
	I0929 11:22:06.812750  295191 system_pods.go:61] "kube-scheduler-addons-571100" [2075c457-20ee-49d3-af4e-4f4601d02b10] Running
	I0929 11:22:06.812774  295191 system_pods.go:61] "metrics-server-85b7d694d7-jxwdz" [ad1132fd-f81a-44bf-876f-2792665cb535] Pending
	I0929 11:22:06.812795  295191 system_pods.go:61] "nvidia-device-plugin-daemonset-gxxwj" [85b56a75-af0c-410c-871f-17e79640d55c] Pending
	I0929 11:22:06.812821  295191 system_pods.go:61] "registry-66898fdd98-gd6vp" [db513fbd-2518-40a7-b7ea-0ac4f5c3ffb4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:22:06.812852  295191 system_pods.go:61] "registry-creds-764b6fb674-6xnxw" [a3f80ee5-24dc-4667-930f-2e3c13d9a6c6] Pending
	I0929 11:22:06.812880  295191 system_pods.go:61] "registry-proxy-587sf" [194fd12a-17d7-4a48-a7f9-68d05e2cacf7] Pending
	I0929 11:22:06.812903  295191 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kgdnm" [45f7b2ca-05a9-4941-90ac-b57aaccb9203] Pending
	I0929 11:22:06.812928  295191 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qsvgm" [38b80812-818d-41b3-81e4-597c7b97a173] Pending
	I0929 11:22:06.812959  295191 system_pods.go:61] "storage-provisioner" [9e070620-f17e-4d54-8acd-35b1e73bc77c] Pending
	I0929 11:22:06.812987  295191 system_pods.go:74] duration metric: took 42.127963ms to wait for pod list to return data ...
	I0929 11:22:06.813011  295191 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:22:06.828651  295191 default_sa.go:45] found service account: "default"
	I0929 11:22:06.828718  295191 default_sa.go:55] duration metric: took 15.685445ms for default service account to be created ...
	I0929 11:22:06.828742  295191 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:22:06.829238  295191 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 11:22:06.829283  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:06.851817  295191 system_pods.go:86] 19 kube-system pods found
	I0929 11:22:06.851921  295191 system_pods.go:89] "coredns-66bc5c9577-8jjxv" [c4109e68-4bdb-4409-b221-58c1cacbc20c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:22:06.851947  295191 system_pods.go:89] "csi-hostpath-attacher-0" [4b09dc09-5e44-4cfd-8dc4-079eeb0fbf91] Pending
	I0929 11:22:06.851980  295191 system_pods.go:89] "csi-hostpath-resizer-0" [55031bba-b46b-47c6-8856-4e53f4c4da8a] Pending
	I0929 11:22:06.852019  295191 system_pods.go:89] "csi-hostpathplugin-6xdd6" [465b0bf5-2d19-419e-97c1-c64657fd0841] Pending
	I0929 11:22:06.852040  295191 system_pods.go:89] "etcd-addons-571100" [de7aac11-d9d7-4257-ae2c-3db739e430d6] Running
	I0929 11:22:06.852076  295191 system_pods.go:89] "kindnet-4wwqc" [1b323b38-59c2-4655-b51d-78bfe7733f32] Running
	I0929 11:22:06.852104  295191 system_pods.go:89] "kube-apiserver-addons-571100" [be282352-a910-40c7-9db7-2609a5295232] Running
	I0929 11:22:06.852136  295191 system_pods.go:89] "kube-controller-manager-addons-571100" [ff3ace80-566d-4c69-8f31-b40f52a638af] Running
	I0929 11:22:06.852156  295191 system_pods.go:89] "kube-ingress-dns-minikube" [f68f4c4d-0640-4eec-b000-0af9f7a49cb2] Pending
	I0929 11:22:06.852180  295191 system_pods.go:89] "kube-proxy-r2dq5" [5a20035a-442c-4e63-8ce5-f71015438a29] Running
	I0929 11:22:06.852216  295191 system_pods.go:89] "kube-scheduler-addons-571100" [2075c457-20ee-49d3-af4e-4f4601d02b10] Running
	I0929 11:22:06.852242  295191 system_pods.go:89] "metrics-server-85b7d694d7-jxwdz" [ad1132fd-f81a-44bf-876f-2792665cb535] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:22:06.852260  295191 system_pods.go:89] "nvidia-device-plugin-daemonset-gxxwj" [85b56a75-af0c-410c-871f-17e79640d55c] Pending
	I0929 11:22:06.852314  295191 system_pods.go:89] "registry-66898fdd98-gd6vp" [db513fbd-2518-40a7-b7ea-0ac4f5c3ffb4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:22:06.852344  295191 system_pods.go:89] "registry-creds-764b6fb674-6xnxw" [a3f80ee5-24dc-4667-930f-2e3c13d9a6c6] Pending
	I0929 11:22:06.852372  295191 system_pods.go:89] "registry-proxy-587sf" [194fd12a-17d7-4a48-a7f9-68d05e2cacf7] Pending
	I0929 11:22:06.852396  295191 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kgdnm" [45f7b2ca-05a9-4941-90ac-b57aaccb9203] Pending
	I0929 11:22:06.852431  295191 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsvgm" [38b80812-818d-41b3-81e4-597c7b97a173] Pending
	I0929 11:22:06.852453  295191 system_pods.go:89] "storage-provisioner" [9e070620-f17e-4d54-8acd-35b1e73bc77c] Pending
	I0929 11:22:06.852482  295191 retry.go:31] will retry after 234.081173ms: missing components: kube-dns
	I0929 11:22:06.968265  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:07.068901  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:07.072709  295191 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 11:22:07.072773  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:07.113079  295191 system_pods.go:86] 19 kube-system pods found
	I0929 11:22:07.113163  295191 system_pods.go:89] "coredns-66bc5c9577-8jjxv" [c4109e68-4bdb-4409-b221-58c1cacbc20c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:22:07.113187  295191 system_pods.go:89] "csi-hostpath-attacher-0" [4b09dc09-5e44-4cfd-8dc4-079eeb0fbf91] Pending
	I0929 11:22:07.113223  295191 system_pods.go:89] "csi-hostpath-resizer-0" [55031bba-b46b-47c6-8856-4e53f4c4da8a] Pending
	I0929 11:22:07.113246  295191 system_pods.go:89] "csi-hostpathplugin-6xdd6" [465b0bf5-2d19-419e-97c1-c64657fd0841] Pending
	I0929 11:22:07.113267  295191 system_pods.go:89] "etcd-addons-571100" [de7aac11-d9d7-4257-ae2c-3db739e430d6] Running
	I0929 11:22:07.113289  295191 system_pods.go:89] "kindnet-4wwqc" [1b323b38-59c2-4655-b51d-78bfe7733f32] Running
	I0929 11:22:07.113321  295191 system_pods.go:89] "kube-apiserver-addons-571100" [be282352-a910-40c7-9db7-2609a5295232] Running
	I0929 11:22:07.113348  295191 system_pods.go:89] "kube-controller-manager-addons-571100" [ff3ace80-566d-4c69-8f31-b40f52a638af] Running
	I0929 11:22:07.113374  295191 system_pods.go:89] "kube-ingress-dns-minikube" [f68f4c4d-0640-4eec-b000-0af9f7a49cb2] Pending
	I0929 11:22:07.113393  295191 system_pods.go:89] "kube-proxy-r2dq5" [5a20035a-442c-4e63-8ce5-f71015438a29] Running
	I0929 11:22:07.113421  295191 system_pods.go:89] "kube-scheduler-addons-571100" [2075c457-20ee-49d3-af4e-4f4601d02b10] Running
	I0929 11:22:07.113447  295191 system_pods.go:89] "metrics-server-85b7d694d7-jxwdz" [ad1132fd-f81a-44bf-876f-2792665cb535] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:22:07.113470  295191 system_pods.go:89] "nvidia-device-plugin-daemonset-gxxwj" [85b56a75-af0c-410c-871f-17e79640d55c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:22:07.113493  295191 system_pods.go:89] "registry-66898fdd98-gd6vp" [db513fbd-2518-40a7-b7ea-0ac4f5c3ffb4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:22:07.113527  295191 system_pods.go:89] "registry-creds-764b6fb674-6xnxw" [a3f80ee5-24dc-4667-930f-2e3c13d9a6c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:22:07.113551  295191 system_pods.go:89] "registry-proxy-587sf" [194fd12a-17d7-4a48-a7f9-68d05e2cacf7] Pending
	I0929 11:22:07.113571  295191 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kgdnm" [45f7b2ca-05a9-4941-90ac-b57aaccb9203] Pending
	I0929 11:22:07.113592  295191 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsvgm" [38b80812-818d-41b3-81e4-597c7b97a173] Pending
	I0929 11:22:07.113624  295191 system_pods.go:89] "storage-provisioner" [9e070620-f17e-4d54-8acd-35b1e73bc77c] Pending
	I0929 11:22:07.113655  295191 retry.go:31] will retry after 352.915417ms: missing components: kube-dns
	I0929 11:22:07.346331  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:07.430014  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:07.496017  295191 system_pods.go:86] 19 kube-system pods found
	I0929 11:22:07.496489  295191 system_pods.go:89] "coredns-66bc5c9577-8jjxv" [c4109e68-4bdb-4409-b221-58c1cacbc20c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:22:07.496537  295191 system_pods.go:89] "csi-hostpath-attacher-0" [4b09dc09-5e44-4cfd-8dc4-079eeb0fbf91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:22:07.496568  295191 system_pods.go:89] "csi-hostpath-resizer-0" [55031bba-b46b-47c6-8856-4e53f4c4da8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:22:07.496590  295191 system_pods.go:89] "csi-hostpathplugin-6xdd6" [465b0bf5-2d19-419e-97c1-c64657fd0841] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:22:07.496625  295191 system_pods.go:89] "etcd-addons-571100" [de7aac11-d9d7-4257-ae2c-3db739e430d6] Running
	I0929 11:22:07.496654  295191 system_pods.go:89] "kindnet-4wwqc" [1b323b38-59c2-4655-b51d-78bfe7733f32] Running
	I0929 11:22:07.496673  295191 system_pods.go:89] "kube-apiserver-addons-571100" [be282352-a910-40c7-9db7-2609a5295232] Running
	I0929 11:22:07.496705  295191 system_pods.go:89] "kube-controller-manager-addons-571100" [ff3ace80-566d-4c69-8f31-b40f52a638af] Running
	I0929 11:22:07.496731  295191 system_pods.go:89] "kube-ingress-dns-minikube" [f68f4c4d-0640-4eec-b000-0af9f7a49cb2] Pending
	I0929 11:22:07.496749  295191 system_pods.go:89] "kube-proxy-r2dq5" [5a20035a-442c-4e63-8ce5-f71015438a29] Running
	I0929 11:22:07.496774  295191 system_pods.go:89] "kube-scheduler-addons-571100" [2075c457-20ee-49d3-af4e-4f4601d02b10] Running
	I0929 11:22:07.496808  295191 system_pods.go:89] "metrics-server-85b7d694d7-jxwdz" [ad1132fd-f81a-44bf-876f-2792665cb535] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:22:07.496840  295191 system_pods.go:89] "nvidia-device-plugin-daemonset-gxxwj" [85b56a75-af0c-410c-871f-17e79640d55c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:22:07.496865  295191 system_pods.go:89] "registry-66898fdd98-gd6vp" [db513fbd-2518-40a7-b7ea-0ac4f5c3ffb4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:22:07.496963  295191 system_pods.go:89] "registry-creds-764b6fb674-6xnxw" [a3f80ee5-24dc-4667-930f-2e3c13d9a6c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:22:07.496988  295191 system_pods.go:89] "registry-proxy-587sf" [194fd12a-17d7-4a48-a7f9-68d05e2cacf7] Pending
	I0929 11:22:07.496996  295191 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kgdnm" [45f7b2ca-05a9-4941-90ac-b57aaccb9203] Pending
	I0929 11:22:07.497016  295191 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsvgm" [38b80812-818d-41b3-81e4-597c7b97a173] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:22:07.497033  295191 system_pods.go:89] "storage-provisioner" [9e070620-f17e-4d54-8acd-35b1e73bc77c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:22:07.497052  295191 retry.go:31] will retry after 427.794159ms: missing components: kube-dns
	I0929 11:22:07.539258  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:07.544712  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:07.806684  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:07.894663  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:07.929866  295191 system_pods.go:86] 19 kube-system pods found
	I0929 11:22:07.929900  295191 system_pods.go:89] "coredns-66bc5c9577-8jjxv" [c4109e68-4bdb-4409-b221-58c1cacbc20c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:22:07.929910  295191 system_pods.go:89] "csi-hostpath-attacher-0" [4b09dc09-5e44-4cfd-8dc4-079eeb0fbf91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:22:07.929918  295191 system_pods.go:89] "csi-hostpath-resizer-0" [55031bba-b46b-47c6-8856-4e53f4c4da8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:22:07.929925  295191 system_pods.go:89] "csi-hostpathplugin-6xdd6" [465b0bf5-2d19-419e-97c1-c64657fd0841] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:22:07.929930  295191 system_pods.go:89] "etcd-addons-571100" [de7aac11-d9d7-4257-ae2c-3db739e430d6] Running
	I0929 11:22:07.929935  295191 system_pods.go:89] "kindnet-4wwqc" [1b323b38-59c2-4655-b51d-78bfe7733f32] Running
	I0929 11:22:07.929941  295191 system_pods.go:89] "kube-apiserver-addons-571100" [be282352-a910-40c7-9db7-2609a5295232] Running
	I0929 11:22:07.929945  295191 system_pods.go:89] "kube-controller-manager-addons-571100" [ff3ace80-566d-4c69-8f31-b40f52a638af] Running
	I0929 11:22:07.929953  295191 system_pods.go:89] "kube-ingress-dns-minikube" [f68f4c4d-0640-4eec-b000-0af9f7a49cb2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:22:07.929965  295191 system_pods.go:89] "kube-proxy-r2dq5" [5a20035a-442c-4e63-8ce5-f71015438a29] Running
	I0929 11:22:07.929969  295191 system_pods.go:89] "kube-scheduler-addons-571100" [2075c457-20ee-49d3-af4e-4f4601d02b10] Running
	I0929 11:22:07.929977  295191 system_pods.go:89] "metrics-server-85b7d694d7-jxwdz" [ad1132fd-f81a-44bf-876f-2792665cb535] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:22:07.929989  295191 system_pods.go:89] "nvidia-device-plugin-daemonset-gxxwj" [85b56a75-af0c-410c-871f-17e79640d55c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:22:07.930002  295191 system_pods.go:89] "registry-66898fdd98-gd6vp" [db513fbd-2518-40a7-b7ea-0ac4f5c3ffb4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:22:07.930012  295191 system_pods.go:89] "registry-creds-764b6fb674-6xnxw" [a3f80ee5-24dc-4667-930f-2e3c13d9a6c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:22:07.930018  295191 system_pods.go:89] "registry-proxy-587sf" [194fd12a-17d7-4a48-a7f9-68d05e2cacf7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:22:07.930023  295191 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kgdnm" [45f7b2ca-05a9-4941-90ac-b57aaccb9203] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:22:07.930030  295191 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsvgm" [38b80812-818d-41b3-81e4-597c7b97a173] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:22:07.930042  295191 system_pods.go:89] "storage-provisioner" [9e070620-f17e-4d54-8acd-35b1e73bc77c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:22:07.930057  295191 retry.go:31] will retry after 604.920731ms: missing components: kube-dns
	I0929 11:22:08.019867  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:08.020038  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:08.298006  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:08.397996  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:08.516726  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:08.518216  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:08.542234  295191 system_pods.go:86] 19 kube-system pods found
	I0929 11:22:08.542314  295191 system_pods.go:89] "coredns-66bc5c9577-8jjxv" [c4109e68-4bdb-4409-b221-58c1cacbc20c] Running
	I0929 11:22:08.542340  295191 system_pods.go:89] "csi-hostpath-attacher-0" [4b09dc09-5e44-4cfd-8dc4-079eeb0fbf91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:22:08.542384  295191 system_pods.go:89] "csi-hostpath-resizer-0" [55031bba-b46b-47c6-8856-4e53f4c4da8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:22:08.542412  295191 system_pods.go:89] "csi-hostpathplugin-6xdd6" [465b0bf5-2d19-419e-97c1-c64657fd0841] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:22:08.542438  295191 system_pods.go:89] "etcd-addons-571100" [de7aac11-d9d7-4257-ae2c-3db739e430d6] Running
	I0929 11:22:08.542471  295191 system_pods.go:89] "kindnet-4wwqc" [1b323b38-59c2-4655-b51d-78bfe7733f32] Running
	I0929 11:22:08.542497  295191 system_pods.go:89] "kube-apiserver-addons-571100" [be282352-a910-40c7-9db7-2609a5295232] Running
	I0929 11:22:08.542519  295191 system_pods.go:89] "kube-controller-manager-addons-571100" [ff3ace80-566d-4c69-8f31-b40f52a638af] Running
	I0929 11:22:08.542556  295191 system_pods.go:89] "kube-ingress-dns-minikube" [f68f4c4d-0640-4eec-b000-0af9f7a49cb2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:22:08.542580  295191 system_pods.go:89] "kube-proxy-r2dq5" [5a20035a-442c-4e63-8ce5-f71015438a29] Running
	I0929 11:22:08.542602  295191 system_pods.go:89] "kube-scheduler-addons-571100" [2075c457-20ee-49d3-af4e-4f4601d02b10] Running
	I0929 11:22:08.542641  295191 system_pods.go:89] "metrics-server-85b7d694d7-jxwdz" [ad1132fd-f81a-44bf-876f-2792665cb535] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:22:08.542666  295191 system_pods.go:89] "nvidia-device-plugin-daemonset-gxxwj" [85b56a75-af0c-410c-871f-17e79640d55c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:22:08.542699  295191 system_pods.go:89] "registry-66898fdd98-gd6vp" [db513fbd-2518-40a7-b7ea-0ac4f5c3ffb4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:22:08.542738  295191 system_pods.go:89] "registry-creds-764b6fb674-6xnxw" [a3f80ee5-24dc-4667-930f-2e3c13d9a6c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:22:08.542759  295191 system_pods.go:89] "registry-proxy-587sf" [194fd12a-17d7-4a48-a7f9-68d05e2cacf7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:22:08.542797  295191 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kgdnm" [45f7b2ca-05a9-4941-90ac-b57aaccb9203] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:22:08.542822  295191 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qsvgm" [38b80812-818d-41b3-81e4-597c7b97a173] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:22:08.542842  295191 system_pods.go:89] "storage-provisioner" [9e070620-f17e-4d54-8acd-35b1e73bc77c] Running
	I0929 11:22:08.542879  295191 system_pods.go:126] duration metric: took 1.714104002s to wait for k8s-apps to be running ...
	I0929 11:22:08.542907  295191 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:22:08.543018  295191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:22:08.570879  295191 system_svc.go:56] duration metric: took 27.956597ms WaitForService to wait for kubelet
	I0929 11:22:08.570955  295191 kubeadm.go:578] duration metric: took 44.557823829s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:22:08.571007  295191 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:22:08.574431  295191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 11:22:08.574511  295191 node_conditions.go:123] node cpu capacity is 2
	I0929 11:22:08.574539  295191 node_conditions.go:105] duration metric: took 3.490935ms to run NodePressure ...
	I0929 11:22:08.574566  295191 start.go:241] waiting for startup goroutines ...
	I0929 11:22:08.805769  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:08.895556  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:09.018739  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:09.019126  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:09.297511  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:09.394904  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:09.517907  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:09.518532  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:09.804579  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:09.898894  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:10.025412  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:10.026200  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:10.297860  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:10.398584  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:10.517033  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:10.518751  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:10.797649  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:10.895238  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:11.016348  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:11.018918  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:11.298243  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:11.394747  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:11.516771  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:11.518675  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:11.802171  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:11.894906  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:12.030298  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:12.032389  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:12.298338  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:12.395286  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:12.516928  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:12.518052  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:12.803135  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:12.894683  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:13.017781  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:13.019019  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:13.297783  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:13.396061  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:13.517582  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:13.519784  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:13.797555  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:13.894417  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:14.020703  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:14.020843  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:14.088183  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:22:14.297927  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:14.398878  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:14.518886  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:14.530554  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:14.797175  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:14.894700  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:22:14.961704  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:22:14.961733  295191 retry.go:31] will retry after 18.221856365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:22:15.017296  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:15.019739  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:15.298230  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:15.394523  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:15.516865  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:15.519088  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:15.798025  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:15.895872  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:16.019024  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:16.019374  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:16.298802  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:16.396389  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:16.518131  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:16.521274  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:16.805689  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:16.894775  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:17.018211  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:17.018557  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:17.299313  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:17.394983  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:17.519138  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:17.520307  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:17.808803  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:17.895208  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:18.018795  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:18.020527  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:18.298431  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:18.395584  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:18.518466  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:18.520101  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:18.804085  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:18.910382  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:19.017570  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:19.018840  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:19.297222  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:19.394159  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:19.515978  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:19.517916  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:19.804996  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:19.905265  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:20.019758  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:20.022121  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:20.298596  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:20.396049  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:20.517048  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:20.519201  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:20.798301  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:20.894161  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:21.018799  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:21.018964  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:21.298068  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:21.394492  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:21.520686  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:21.521914  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:21.806617  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:21.895220  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:22.021177  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:22.021602  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:22.303864  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:22.421988  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:22.523686  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:22.524234  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:22.833394  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:22.958615  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:23.023889  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:23.027534  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:23.298411  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:23.395660  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:23.519192  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:23.519634  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:23.834001  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:23.900580  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:24.018472  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:24.019113  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:24.298008  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:24.396726  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:24.521356  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:24.521800  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:24.812560  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:24.894965  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:25.017255  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:25.019573  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:25.297803  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:25.394602  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:25.519696  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:25.520077  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:25.818157  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:25.895973  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:26.019765  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:26.020715  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:26.296924  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:26.394737  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:26.516897  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:26.517875  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:26.802945  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:26.895063  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:27.016581  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:27.019131  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:27.297650  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:27.394665  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:27.519080  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:27.519531  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:27.806151  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:27.896178  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:28.023257  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:28.023925  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:28.298430  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:28.395219  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:28.518655  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:28.520547  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:28.801162  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:28.894376  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:29.018611  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:29.018810  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:29.297048  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:29.396065  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:29.517451  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:29.520225  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:29.814306  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:29.906391  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:30.020463  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:30.020705  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:30.302469  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:30.398660  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:30.517957  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:30.519911  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:30.834663  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:30.894697  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:31.020176  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:31.020636  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:31.297842  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:31.395005  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:31.524121  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:31.524422  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:31.800738  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:31.895329  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:32.018216  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:32.019999  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:32.298354  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:32.398149  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:32.518114  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:32.518746  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:32.801715  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:32.895251  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:33.017419  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:33.020984  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:33.184311  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:22:33.302460  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:33.396336  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:33.516478  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:33.518698  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:33.820883  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:33.907005  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:34.018872  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:34.021651  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:34.240389  295191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.056037885s)
	W0929 11:22:34.240476  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:22:34.240502  295191 retry.go:31] will retry after 30.430302348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:22:34.301617  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:34.394741  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:34.526949  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:34.527383  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:34.803812  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:34.896124  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:35.016763  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:35.019565  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:35.299273  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:35.399973  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:35.518739  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:35.518949  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:35.802768  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:35.895053  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:36.023208  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:36.023359  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:36.298278  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:36.398142  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:36.516556  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:36.518692  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:36.798139  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:36.897405  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:37.017325  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:37.018204  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:37.297517  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:37.395293  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:37.517184  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:37.517985  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:37.801347  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:37.894643  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:38.018007  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:38.018504  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:38.297562  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:38.395217  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:38.517288  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:38.518255  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:38.804267  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:38.895175  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:39.016973  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:39.019089  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:39.300596  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:39.421780  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:39.519324  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:39.519390  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:39.802732  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:39.894354  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:40.016780  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:40.018772  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:40.297056  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:40.395120  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:40.516194  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:40.519077  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:40.798016  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:40.894610  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:41.016635  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:41.018757  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:22:41.297224  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:41.394762  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:41.516884  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:41.519132  295191 kapi.go:107] duration metric: took 1m11.504108893s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 11:22:41.797838  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:41.894522  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:42.016770  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:42.301135  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:42.396039  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:42.517060  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:42.815578  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:42.895033  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:43.016576  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:43.298129  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:43.395223  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:43.517040  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:43.804617  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:43.895000  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:44.017654  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:44.297290  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:44.395358  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:44.517143  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:44.810750  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:44.894840  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:45.017550  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:45.299282  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:45.419311  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:45.517360  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:45.802769  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:45.895179  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:46.016857  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:46.298304  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:46.397096  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:46.516488  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:46.798185  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:46.901172  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:47.016181  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:47.298126  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:47.394323  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:47.516783  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:47.800577  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:47.894712  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:48.017801  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:48.298381  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:48.395627  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:48.517628  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:48.810705  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:48.895583  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:49.017663  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:49.301226  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:49.403918  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:49.520202  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:49.800703  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:49.894945  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:50.077533  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:50.298604  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:50.394889  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:50.518043  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:50.801915  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:50.895276  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:51.024126  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:51.297687  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:51.400835  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:51.517146  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:51.798045  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:51.894899  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:52.020588  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:52.298404  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:52.394280  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:52.516436  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:52.806985  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:52.894967  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:53.017574  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:53.299237  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:53.408994  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:53.517676  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:53.858279  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:53.894943  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:54.017427  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:54.298270  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:54.396832  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:54.524812  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:54.811071  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:54.894462  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:55.018300  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:55.298499  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:55.395679  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:55.518915  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:55.799812  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:55.894810  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:56.017634  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:56.298687  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:56.399370  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:56.516476  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:56.801894  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:56.895109  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:57.017238  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:57.297607  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:57.394864  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:57.517924  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:57.803334  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:57.894610  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:58.017149  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:58.298386  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:58.398527  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:58.517293  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:58.798863  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:58.894674  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:59.017166  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:59.297757  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:59.394574  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:22:59.518507  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:22:59.817815  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:22:59.894662  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:00.017109  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:00.299602  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:00.395373  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:00.517652  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:00.811450  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:00.896214  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:01.018187  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:01.297767  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:01.397021  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:01.516929  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:01.797955  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:01.894955  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:02.015941  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:02.298917  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:02.394886  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:02.517131  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:02.799030  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:02.894971  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:03.018486  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:03.298595  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:03.395702  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:03.517735  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:03.798164  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:03.894025  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:04.016260  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:04.297960  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:04.411885  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:04.517257  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:04.671646  295191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:04.808062  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:04.900966  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:05.021563  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:05.300097  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:05.397054  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:05.517086  295191 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:23:05.660435  295191 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 11:23:05.660528  295191 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
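Editor's note: the two apply failures above both come from kubectl rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing its top-level type metadata ("apiVersion not set, kind not set"). Every Kubernetes manifest must declare apiVersion and kind before it will pass validation. A minimal, hypothetical sketch of a CRD manifest that would pass this check is shown below; it is for illustration only and is not the contents of the actual ig-crd.yaml, which this log does not include.

    apiVersion: apiextensions.k8s.io/v1      # required type metadata; omitting it produces "apiVersion not set"
    kind: CustomResourceDefinition           # required; omitting it produces "kind not set"
    metadata:
      name: traces.gadget.kinvolk.io         # hypothetical CRD name, chosen for illustration
    spec:
      group: gadget.kinvolk.io
      names:
        kind: Trace
        plural: traces
      scope: Namespaced
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object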
	I0929 11:23:05.801349  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:05.895803  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:06.017210  295191 kapi.go:107] duration metric: took 1m36.004052513s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 11:23:06.297619  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:06.394078  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:06.814317  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:06.894956  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:07.300005  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:07.395036  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:07.826769  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:07.895585  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:08.298877  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:08.394754  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:08.801209  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:08.894653  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:09.297505  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:09.397687  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:09.799078  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:09.896497  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:10.298795  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:10.396192  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:10.810333  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:10.902361  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:11.297473  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:11.394499  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:11.798578  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:11.894472  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:12.303071  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:12.394307  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:12.815588  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:12.894448  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:13.297904  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:13.394850  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:13.797276  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:13.906137  295191 kapi.go:107] duration metric: took 1m38.015005663s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 11:23:13.910457  295191 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-571100 cluster.
	I0929 11:23:13.913335  295191 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 11:23:13.916268  295191 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
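Editor's note: the gcp-auth messages above describe opting a pod out of credential mounting by adding a label with the gcp-auth-skip-secret key. A minimal, hypothetical pod spec carrying that label is sketched below; the label key is taken from the log output, while the pod name, label value, and image are placeholders and assumptions.

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds-example             # placeholder name
      labels:
        gcp-auth-skip-secret: "true"         # label key from the addon message; the value is an assumption
    spec:
      containers:
        - name: app
          image: gcr.io/k8s-minikube/busybox # image name reused from the container list later in this report
          command: ["sleep", "3600"]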
	I0929 11:23:14.298116  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:14.822024  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:15.297802  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:15.806063  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:16.299328  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:16.805604  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:17.298877  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:17.797804  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:18.299081  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:18.799662  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:19.297615  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:19.797664  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:20.299742  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:20.798017  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:21.297648  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:21.799187  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:22.300230  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:22.802725  295191 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:23.297403  295191 kapi.go:107] duration metric: took 1m53.003466089s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 11:23:23.300365  295191 out.go:179] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, registry-creds, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 11:23:23.303103  295191 addons.go:514] duration metric: took 1m59.289581154s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin registry-creds nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 11:23:23.303153  295191 start.go:246] waiting for cluster config update ...
	I0929 11:23:23.303173  295191 start.go:255] writing updated cluster config ...
	I0929 11:23:23.303504  295191 ssh_runner.go:195] Run: rm -f paused
	I0929 11:23:23.306854  295191 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:23:23.310326  295191 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8jjxv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:23.316374  295191 pod_ready.go:94] pod "coredns-66bc5c9577-8jjxv" is "Ready"
	I0929 11:23:23.316402  295191 pod_ready.go:86] duration metric: took 6.047393ms for pod "coredns-66bc5c9577-8jjxv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:23.318727  295191 pod_ready.go:83] waiting for pod "etcd-addons-571100" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:23.322599  295191 pod_ready.go:94] pod "etcd-addons-571100" is "Ready"
	I0929 11:23:23.322626  295191 pod_ready.go:86] duration metric: took 3.877702ms for pod "etcd-addons-571100" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:23.325090  295191 pod_ready.go:83] waiting for pod "kube-apiserver-addons-571100" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:23.329422  295191 pod_ready.go:94] pod "kube-apiserver-addons-571100" is "Ready"
	I0929 11:23:23.329451  295191 pod_ready.go:86] duration metric: took 4.337617ms for pod "kube-apiserver-addons-571100" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:23.331534  295191 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-571100" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:23.710406  295191 pod_ready.go:94] pod "kube-controller-manager-addons-571100" is "Ready"
	I0929 11:23:23.710438  295191 pod_ready.go:86] duration metric: took 378.882084ms for pod "kube-controller-manager-addons-571100" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:23.910127  295191 pod_ready.go:83] waiting for pod "kube-proxy-r2dq5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:24.311026  295191 pod_ready.go:94] pod "kube-proxy-r2dq5" is "Ready"
	I0929 11:23:24.311053  295191 pod_ready.go:86] duration metric: took 400.899323ms for pod "kube-proxy-r2dq5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:24.511502  295191 pod_ready.go:83] waiting for pod "kube-scheduler-addons-571100" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:24.913377  295191 pod_ready.go:94] pod "kube-scheduler-addons-571100" is "Ready"
	I0929 11:23:24.913412  295191 pod_ready.go:86] duration metric: took 401.882717ms for pod "kube-scheduler-addons-571100" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:23:24.913425  295191 pod_ready.go:40] duration metric: took 1.606539123s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:23:24.972946  295191 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 11:23:24.976627  295191 out.go:179] * Done! kubectl is now configured to use "addons-571100" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 11:26:19 addons-571100 crio[991]: time="2025-09-29 11:26:19.321904391Z" level=info msg="Removed pod sandbox: 370f44b337d2c3a2526f0d5bd92afb715a3cfebea73146a7fb36906cd753e56c" id=201d5d6d-af68-4b6f-b164-521da9833d3c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.446821097Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-jknqj/POD" id=1c38c917-b2a1-4b25-87a7-11c5d3113bdd name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.446881390Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.492590564Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-jknqj Namespace:default ID:589c13264110cafd18287579b6f82e0fa1ac1b200fe1487ea58189225e9aa98f UID:71a093c5-231f-44c9-a6fc-51d5fbfc648d NetNS:/var/run/netns/01fc5500-a55a-4916-aed1-92836ee7c58f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.492632431Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-jknqj to CNI network \"kindnet\" (type=ptp)"
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.503133128Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-jknqj Namespace:default ID:589c13264110cafd18287579b6f82e0fa1ac1b200fe1487ea58189225e9aa98f UID:71a093c5-231f-44c9-a6fc-51d5fbfc648d NetNS:/var/run/netns/01fc5500-a55a-4916-aed1-92836ee7c58f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.504021170Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-jknqj for CNI network kindnet (type=ptp)"
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.507392836Z" level=info msg="Ran pod sandbox 589c13264110cafd18287579b6f82e0fa1ac1b200fe1487ea58189225e9aa98f with infra container: default/hello-world-app-5d498dc89-jknqj/POD" id=1c38c917-b2a1-4b25-87a7-11c5d3113bdd name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.508824979Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e89e3b91-aab0-460e-9faf-a1387c75ef1e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.509032056Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e89e3b91-aab0-460e-9faf-a1387c75ef1e name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.509713514Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=6b2cba1a-4e0c-47a5-b614-38f2e00f1614 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.512366466Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 29 11:27:29 addons-571100 crio[991]: time="2025-09-29 11:27:29.740821145Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.444102640Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=6b2cba1a-4e0c-47a5-b614-38f2e00f1614 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.444673307Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f67911ea-9b4d-4fe1-a012-2d7a6ccdb965 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.445801735Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f67911ea-9b4d-4fe1-a012-2d7a6ccdb965 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.448199042Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=671a4815-d60d-454c-a0ee-485adc3e0795 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.448853803Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=671a4815-d60d-454c-a0ee-485adc3e0795 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.454473744Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-jknqj/hello-world-app" id=a4c6dc1a-d79d-4ded-9c54-3c1de8cdb30f name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.454578972Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.485197013Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/71a944f1d1cd33265be7a24129efe70c7cf5b46c7baf8c3d2a9b35ce3b6575d6/merged/etc/passwd: no such file or directory"
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.485242408Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/71a944f1d1cd33265be7a24129efe70c7cf5b46c7baf8c3d2a9b35ce3b6575d6/merged/etc/group: no such file or directory"
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.556000399Z" level=info msg="Created container 5d8989c3a3f709f90a2abba9954fdf781da1446350d2904b7d25030b2b728a9e: default/hello-world-app-5d498dc89-jknqj/hello-world-app" id=a4c6dc1a-d79d-4ded-9c54-3c1de8cdb30f name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.558688753Z" level=info msg="Starting container: 5d8989c3a3f709f90a2abba9954fdf781da1446350d2904b7d25030b2b728a9e" id=a2197fa5-d210-4c44-a53a-975a79440008 name=/runtime.v1.RuntimeService/StartContainer
	Sep 29 11:27:30 addons-571100 crio[991]: time="2025-09-29 11:27:30.573069523Z" level=info msg="Started container" PID=9922 containerID=5d8989c3a3f709f90a2abba9954fdf781da1446350d2904b7d25030b2b728a9e description=default/hello-world-app-5d498dc89-jknqj/hello-world-app id=a2197fa5-d210-4c44-a53a-975a79440008 name=/runtime.v1.RuntimeService/StartContainer sandboxID=589c13264110cafd18287579b6f82e0fa1ac1b200fe1487ea58189225e9aa98f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	5d8989c3a3f70       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   589c13264110c       hello-world-app-5d498dc89-jknqj
	dc936c9c2434d       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   38613f82034e1       nginx
	7198d32f65064       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                   0                   1e4af83e78ee1       busybox
	645add8975e5d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago            Running             gadget                    0                   fe0b4cf3a99a6       gadget-qlp49
	61a40256fb325       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             4 minutes ago            Running             controller                0                   0d420044a2064       ingress-nginx-controller-9cc49f96f-hsz9n
	6db2e848690d7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              patch                     0                   56feb8ca5f0a8       ingress-nginx-admission-patch-dpq67
	e3e8270c62646       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958               5 minutes ago            Running             minikube-ingress-dns      0                   b1c03e0641902       kube-ingress-dns-minikube
	359b932f3b96e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   5 minutes ago            Exited              create                    0                   014770876c960       ingress-nginx-admission-create-dl5qf
	040a14ba6ce00       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   c59848fe3b369       storage-provisioner
	b280b2f61def9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                             5 minutes ago            Running             coredns                   0                   b3cf305b7abd8       coredns-66bc5c9577-8jjxv
	b20e8009c6ac7       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                             6 minutes ago            Running             kube-proxy                0                   fedd1fdffd64a       kube-proxy-r2dq5
	317cc76ed1dac       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                             6 minutes ago            Running             kindnet-cni               0                   9d7b79482f4a5       kindnet-4wwqc
	1983cc481f378       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                             6 minutes ago            Running             kube-scheduler            0                   841c27c575380       kube-scheduler-addons-571100
	92abebd4d1d8d       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                             6 minutes ago            Running             kube-controller-manager   0                   3dd124e068aba       kube-controller-manager-addons-571100
	a2e131429284c       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                             6 minutes ago            Running             kube-apiserver            0                   7c42324b556ba       kube-apiserver-addons-571100
	700a3327822d2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                             6 minutes ago            Running             etcd                      0                   a8cfeb856312b       etcd-addons-571100
	
	
	==> coredns [b280b2f61def922f3cb1e67ecc5d6fe8cff5633e2172f0806d9f0715d5ddd315] <==
	[INFO] 10.244.0.14:51534 - 25256 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001554874s
	[INFO] 10.244.0.14:51534 - 39415 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000538347s
	[INFO] 10.244.0.14:51534 - 13374 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000084561s
	[INFO] 10.244.0.14:41119 - 29569 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137106s
	[INFO] 10.244.0.14:41119 - 29339 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070268s
	[INFO] 10.244.0.14:33065 - 57134 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078284s
	[INFO] 10.244.0.14:33065 - 56913 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059839s
	[INFO] 10.244.0.14:32988 - 53994 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072762s
	[INFO] 10.244.0.14:32988 - 53532 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059101s
	[INFO] 10.244.0.14:45060 - 23266 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001154124s
	[INFO] 10.244.0.14:45060 - 22788 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001131003s
	[INFO] 10.244.0.14:55777 - 61436 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096893s
	[INFO] 10.244.0.14:55777 - 61008 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000066576s
	[INFO] 10.244.0.21:54644 - 7091 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213986s
	[INFO] 10.244.0.21:50579 - 12583 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000205093s
	[INFO] 10.244.0.21:54546 - 10210 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096548s
	[INFO] 10.244.0.21:41599 - 62876 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118349s
	[INFO] 10.244.0.21:44247 - 61075 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000369283s
	[INFO] 10.244.0.21:59863 - 60437 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168752s
	[INFO] 10.244.0.21:55809 - 1776 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002202978s
	[INFO] 10.244.0.21:59096 - 45707 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001897613s
	[INFO] 10.244.0.21:36717 - 56101 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001802378s
	[INFO] 10.244.0.21:58113 - 31231 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002431225s
	[INFO] 10.244.0.23:45812 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000270617s
	[INFO] 10.244.0.23:34845 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000193186s
	
	
	==> describe nodes <==
	Name:               addons-571100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-571100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=addons-571100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_21_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-571100
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:21:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-571100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:27:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:25:23 +0000   Mon, 29 Sep 2025 11:21:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:25:23 +0000   Mon, 29 Sep 2025 11:21:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:25:23 +0000   Mon, 29 Sep 2025 11:21:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:25:23 +0000   Mon, 29 Sep 2025 11:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-571100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbc560ea56854836a107a9d7c8b75769
	  System UUID:                673ee4ba-cec0-45a4-8ad4-9c79da5040d7
	  Boot ID:                    3ea59072-b9ed-4996-bd90-d451fda04a88
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  default                     hello-world-app-5d498dc89-jknqj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-qlp49                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-hsz9n    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-8jjxv                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m7s
	  kube-system                 etcd-addons-571100                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m12s
	  kube-system                 kindnet-4wwqc                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m8s
	  kube-system                 kube-apiserver-addons-571100                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-addons-571100       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-r2dq5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-scheduler-addons-571100                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 6m1s   kube-proxy       
	  Normal   Starting                 6m13s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m13s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m13s  kubelet          Node addons-571100 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m13s  kubelet          Node addons-571100 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m13s  kubelet          Node addons-571100 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m9s   node-controller  Node addons-571100 event: Registered Node addons-571100 in Controller
	  Normal   NodeReady                5m25s  kubelet          Node addons-571100 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep29 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015067] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.515134] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034601] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.790647] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.751861] kauditd_printk_skb: 36 callbacks suppressed
	[Sep29 10:36] hrtimer: interrupt took 21542036 ns
	[Sep29 11:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [700a3327822d21c4d5ac83b9e5f61587d1de68180e36c595e608d71fdc57579b] <==
	{"level":"warn","ts":"2025-09-29T11:21:14.938505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:14.964773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:14.976769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:14.996035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:15.029323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:15.035131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:15.056363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:15.113526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:15.126818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:15.149398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:15.178550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:15.196857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:15.218373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:15.286743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:24.726519Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.292023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:21:24.732361Z","caller":"traceutil/trace.go:172","msg":"trace[216250569] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:351; }","duration":"127.142393ms","start":"2025-09-29T11:21:24.605198Z","end":"2025-09-29T11:21:24.732340Z","steps":["trace[216250569] 'range keys from in-memory index tree'  (duration: 119.235179ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:21:25.904731Z","caller":"traceutil/trace.go:172","msg":"trace[919807316] transaction","detail":"{read_only:false; response_revision:355; number_of_response:1; }","duration":"108.548824ms","start":"2025-09-29T11:21:25.796167Z","end":"2025-09-29T11:21:25.904716Z","steps":["trace[919807316] 'process raft request'  (duration: 107.801414ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:21:26.935878Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.826039ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040293607751276 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-node-lease/default\" mod_revision:302 > success:<request_put:<key:\"/registry/serviceaccounts/kube-node-lease/default\" value_size:128 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-node-lease/default\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-29T11:21:26.935964Z","caller":"traceutil/trace.go:172","msg":"trace[1612861538] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"147.182463ms","start":"2025-09-29T11:21:26.788770Z","end":"2025-09-29T11:21:26.935953Z","steps":["trace[1612861538] 'process raft request'  (duration: 26.899299ms)","trace[1612861538] 'compare'  (duration: 119.734602ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:21:30.864658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:30.990743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:52.962356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:52.980600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:52.999369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:21:53.014479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36310","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:27:31 up  1:10,  0 users,  load average: 0.41, 1.61, 2.61
	Linux addons-571100 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [317cc76ed1dac0f9b19881b1f7fd596410bd957065c9432af73eda070d7cedaf] <==
	I0929 11:25:26.333468       1 main.go:301] handling current node
	I0929 11:25:36.333634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:25:36.333670       1 main.go:301] handling current node
	I0929 11:25:46.341695       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:25:46.341745       1 main.go:301] handling current node
	I0929 11:25:56.340374       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:25:56.340406       1 main.go:301] handling current node
	I0929 11:26:06.340363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:26:06.340401       1 main.go:301] handling current node
	I0929 11:26:16.339245       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:26:16.339277       1 main.go:301] handling current node
	I0929 11:26:26.336088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:26:26.336233       1 main.go:301] handling current node
	I0929 11:26:36.338859       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:26:36.338895       1 main.go:301] handling current node
	I0929 11:26:46.340360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:26:46.340396       1 main.go:301] handling current node
	I0929 11:26:56.342422       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:26:56.342455       1 main.go:301] handling current node
	I0929 11:27:06.339235       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:27:06.339270       1 main.go:301] handling current node
	I0929 11:27:16.340358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:27:16.340391       1 main.go:301] handling current node
	I0929 11:27:26.340409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:27:26.340522       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a2e131429284c4e6f51afdc453e54ae0cac33f7de836f2f1b6db7cd250bfb1b0] <==
	I0929 11:24:11.414433       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.104.48"}
	E0929 11:24:30.157208       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0929 11:24:38.147653       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:24:50.824448       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0929 11:24:59.004532       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:25:09.939568       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 11:25:10.261663       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.205.58"}
	I0929 11:25:17.691660       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:25:17.692363       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:25:17.723654       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:25:17.723784       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:25:17.738882       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:25:17.739005       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:25:17.752066       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:25:17.752119       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:25:17.792456       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:25:17.792564       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 11:25:18.742297       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 11:25:18.793028       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 11:25:18.911503       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0929 11:25:23.859475       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 11:25:44.802796       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:26:22.070531       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:27:11.331279       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:27:29.321389       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.169.235"}
	
	
	==> kube-controller-manager [92abebd4d1d8d597463dad32e9fd5c879072ca5324e40eaf936df45ead1210a0] <==
	E0929 11:25:26.702776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:25:27.126171       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:25:27.127171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:25:33.183455       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:25:33.184550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:25:33.298568       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:25:33.299525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:25:37.315436       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:25:37.316403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:25:48.807063       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:25:48.807977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:25:49.506327       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:25:49.507403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:25:59.218617       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:25:59.220837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:26:24.720633       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:26:24.721700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:26:34.183144       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:26:34.184200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:26:46.312756       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:26:46.313772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:26:54.836661       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:26:54.837633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:27:07.068060       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:27:07.069174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [b20e8009c6ac7992726c4befb1976dc0fa8a33a81c550b29e49a582f590e8c39] <==
	I0929 11:21:28.114026       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:21:28.765446       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:21:29.108401       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:21:29.114712       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:21:29.114972       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:21:29.550983       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:21:29.551041       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:21:29.621054       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:21:29.621352       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:21:29.621375       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:21:29.734679       1 config.go:200] "Starting service config controller"
	I0929 11:21:29.734765       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:21:29.734815       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:21:29.734843       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:21:29.734895       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:21:29.734923       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:21:29.735833       1 config.go:309] "Starting node config controller"
	I0929 11:21:29.735929       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:21:29.735961       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:21:29.836965       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:21:29.844657       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:21:29.844702       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1983cc481f3783e301164608e81ea5d059297591bf149227e10dc25e49fae83e] <==
	I0929 11:21:17.158746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:21:17.162284       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:21:17.162417       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:21:17.162841       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:21:17.163050       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 11:21:17.171977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:21:17.172140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:21:17.172450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:21:17.172536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 11:21:17.177474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:21:17.177708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 11:21:17.177850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:21:17.177958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:21:17.178083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:21:17.178191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:21:17.178282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:21:17.178388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:21:17.178491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:21:17.178599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:21:17.178700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:21:17.178814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:21:17.178917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:21:17.179093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:21:17.179253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0929 11:21:18.462850       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.897782    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a98e49bcbfc5cfd45a4b2ddba2be7941b096787999bb08f02e8c8e86fb3c07b9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a98e49bcbfc5cfd45a4b2ddba2be7941b096787999bb08f02e8c8e86fb3c07b9/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.907531    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d9277b737bc3c292991c5bd7ad0086b3b6b4dd0bb7d3c10c988de4a1771c5f59/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d9277b737bc3c292991c5bd7ad0086b3b6b4dd0bb7d3c10c988de4a1771c5f59/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.907602    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f58b2e6fcd4f5f81c369a72ced8b3694601ac3a8c033fc3fead2dab8bc7fd7c9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f58b2e6fcd4f5f81c369a72ced8b3694601ac3a8c033fc3fead2dab8bc7fd7c9/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.913163    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/808c739fdb1e27824f8c3178b4746a58010a5af3c5483b74952b2335450ac2a3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/808c739fdb1e27824f8c3178b4746a58010a5af3c5483b74952b2335450ac2a3/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.916825    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2d855081d7ee52513288835b6dfc6c15b2041dd1189158240e46ba55b9612466/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2d855081d7ee52513288835b6dfc6c15b2041dd1189158240e46ba55b9612466/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.941368    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d9277b737bc3c292991c5bd7ad0086b3b6b4dd0bb7d3c10c988de4a1771c5f59/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d9277b737bc3c292991c5bd7ad0086b3b6b4dd0bb7d3c10c988de4a1771c5f59/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.947845    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3b0f9ad89ea29f3df157c5fb016bb3cffc9f652ba5d69616573ab5c2edcdbf1e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3b0f9ad89ea29f3df157c5fb016bb3cffc9f652ba5d69616573ab5c2edcdbf1e/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.955235    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cd009d58c82078b6a8c2c447e03697e76ce5e9c8889e09813c915d0db68bef1c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cd009d58c82078b6a8c2c447e03697e76ce5e9c8889e09813c915d0db68bef1c/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.958465    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/91897d468bda8263004e750d682f8d70cdddab9e4102aca5e4f56ff72ee76516/diff" to get inode usage: stat /var/lib/containers/storage/overlay/91897d468bda8263004e750d682f8d70cdddab9e4102aca5e4f56ff72ee76516/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.959798    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a1faef7e7c4987f45b41450ab542980306246dd17cd5d637238cdd4120df6329/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a1faef7e7c4987f45b41450ab542980306246dd17cd5d637238cdd4120df6329/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.962980    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cd009d58c82078b6a8c2c447e03697e76ce5e9c8889e09813c915d0db68bef1c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cd009d58c82078b6a8c2c447e03697e76ce5e9c8889e09813c915d0db68bef1c/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.974697    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6b1beb0b2c466263408ccc73a0d0c92d4e3d65460ac1956097d1558f4a60d8ff/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6b1beb0b2c466263408ccc73a0d0c92d4e3d65460ac1956097d1558f4a60d8ff/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.978855    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/91897d468bda8263004e750d682f8d70cdddab9e4102aca5e4f56ff72ee76516/diff" to get inode usage: stat /var/lib/containers/storage/overlay/91897d468bda8263004e750d682f8d70cdddab9e4102aca5e4f56ff72ee76516/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.988454    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/903d2553ff7b7bd6af8ce16705cabf9e2d897cbf41985af4f58204d60774177a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/903d2553ff7b7bd6af8ce16705cabf9e2d897cbf41985af4f58204d60774177a/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.988498    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3055ec1fea9476397eb8ec381b562fbe92219d79a6406ce3331b96f4b089d0bd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3055ec1fea9476397eb8ec381b562fbe92219d79a6406ce3331b96f4b089d0bd/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.988523    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f5043630b3c25490583b5e6c7e13d38a3f5f1e763265d469a505d8a5cc1a2ada/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f5043630b3c25490583b5e6c7e13d38a3f5f1e763265d469a505d8a5cc1a2ada/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.995589    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/808c739fdb1e27824f8c3178b4746a58010a5af3c5483b74952b2335450ac2a3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/808c739fdb1e27824f8c3178b4746a58010a5af3c5483b74952b2335450ac2a3/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.995630    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6b1beb0b2c466263408ccc73a0d0c92d4e3d65460ac1956097d1558f4a60d8ff/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6b1beb0b2c466263408ccc73a0d0c92d4e3d65460ac1956097d1558f4a60d8ff/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.995866    1545 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2d855081d7ee52513288835b6dfc6c15b2041dd1189158240e46ba55b9612466/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2d855081d7ee52513288835b6dfc6c15b2041dd1189158240e46ba55b9612466/diff: no such file or directory, extraDiskErr: <nil>
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.997982    1545 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145238997743883 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 11:27:18 addons-571100 kubelet[1545]: E0929 11:27:18.998014    1545 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145238997743883 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 11:27:29 addons-571100 kubelet[1545]: E0929 11:27:29.001023    1545 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145249000777635 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 11:27:29 addons-571100 kubelet[1545]: E0929 11:27:29.001055    1545 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145249000777635 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 29 11:27:29 addons-571100 kubelet[1545]: I0929 11:27:29.205434    1545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sk8p\" (UniqueName: \"kubernetes.io/projected/71a093c5-231f-44c9-a6fc-51d5fbfc648d-kube-api-access-4sk8p\") pod \"hello-world-app-5d498dc89-jknqj\" (UID: \"71a093c5-231f-44c9-a6fc-51d5fbfc648d\") " pod="default/hello-world-app-5d498dc89-jknqj"
	Sep 29 11:27:29 addons-571100 kubelet[1545]: W0929 11:27:29.505937    1545 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1bbe4232e53007db479dfe37e29983fa94b72c8386427477956abb9d7fca4814/crio-589c13264110cafd18287579b6f82e0fa1ac1b200fe1487ea58189225e9aa98f WatchSource:0}: Error finding container 589c13264110cafd18287579b6f82e0fa1ac1b200fe1487ea58189225e9aa98f: Status 404 returned error can't find the container with id 589c13264110cafd18287579b6f82e0fa1ac1b200fe1487ea58189225e9aa98f
	
	
	==> storage-provisioner [040a14ba6ce000adf844f908947c0be75bffda528f309ad55f245d46fb980707] <==
	W0929 11:27:07.540047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:09.543983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:09.550954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:11.553550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:11.557881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:13.561679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:13.567901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:15.570765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:15.575115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:17.578117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:17.584993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:19.588339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:19.592679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:21.595669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:21.602384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:23.606753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:23.613323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:25.616160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:25.622946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:27.626097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:27.630223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:29.633993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:29.641989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:31.644776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:27:31.650835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-571100 -n addons-571100
helpers_test.go:269: (dbg) Run:  kubectl --context addons-571100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-dl5qf ingress-nginx-admission-patch-dpq67
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-571100 describe pod ingress-nginx-admission-create-dl5qf ingress-nginx-admission-patch-dpq67
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-571100 describe pod ingress-nginx-admission-create-dl5qf ingress-nginx-admission-patch-dpq67: exit status 1 (85.600318ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dl5qf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dpq67" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-571100 describe pod ingress-nginx-admission-create-dl5qf ingress-nginx-admission-patch-dpq67: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-571100 addons disable ingress-dns --alsologtostderr -v=1: (1.613031206s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-571100 addons disable ingress --alsologtostderr -v=1: (7.7757648s)
--- FAIL: TestAddons/parallel/Ingress (152.24s)
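
For reference, the non-running-pod sweep in the post-mortem above can be reproduced outside the test harness. The sketch below is illustrative only: it assumes kubectl is on PATH, reuses the addons-571100 context from this run, and the helper name listNonRunningPods is invented here, not taken from the harness.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listNonRunningPods mirrors the query used in the post-mortem above: names of
// pods in any namespace whose phase is not Running. (Helper name is invented.)
func listNonRunningPods(kubeContext string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	const kubeContext = "addons-571100" // context name taken from this run
	pods, err := listNonRunningPods(kubeContext)
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	for _, p := range pods {
		// Like the harness, describe by name only; pods that live outside the
		// default namespace (or that have already been cleaned up) come back
		// NotFound, consistent with the errors recorded above.
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"describe", "pod", p).CombinedOutput()
		fmt.Printf("--- %s ---\n%s", p, out)
		if err != nil {
			fmt.Println("describe failed:", err)
		}
	}
}

Passing each pod's namespace to the describe step (for example -n ingress-nginx for the admission-job pods) would avoid the NotFound exits, at the cost of diverging from what the harness runs.
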

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-686485 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-686485 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-686485 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-686485 --alsologtostderr -v=1] stderr:
I0929 11:45:54.044420  324409 out.go:360] Setting OutFile to fd 1 ...
I0929 11:45:54.045229  324409 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:45:54.045243  324409 out.go:374] Setting ErrFile to fd 2...
I0929 11:45:54.045249  324409 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:45:54.045534  324409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
I0929 11:45:54.045819  324409 mustload.go:65] Loading cluster: functional-686485
I0929 11:45:54.046223  324409 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:45:54.046680  324409 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
I0929 11:45:54.065567  324409 host.go:66] Checking if "functional-686485" exists ...
I0929 11:45:54.065908  324409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0929 11:45:54.123507  324409 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 11:45:54.113504957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0929 11:45:54.123622  324409 api_server.go:166] Checking apiserver status ...
I0929 11:45:54.123685  324409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0929 11:45:54.123735  324409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
I0929 11:45:54.140859  324409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
I0929 11:45:54.246749  324409 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4582/cgroup
I0929 11:45:54.255856  324409 api_server.go:182] apiserver freezer: "10:freezer:/docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio/crio-7e02997c4a169c16aa505782e2302d588dd8856611e6e53513deba2f5708373a"
I0929 11:45:54.255931  324409 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio/crio-7e02997c4a169c16aa505782e2302d588dd8856611e6e53513deba2f5708373a/freezer.state
I0929 11:45:54.264426  324409 api_server.go:204] freezer state: "THAWED"
I0929 11:45:54.264464  324409 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0929 11:45:54.273781  324409 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0929 11:45:54.273843  324409 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0929 11:45:54.274051  324409 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:45:54.274076  324409 addons.go:69] Setting dashboard=true in profile "functional-686485"
I0929 11:45:54.274086  324409 addons.go:238] Setting addon dashboard=true in "functional-686485"
I0929 11:45:54.274112  324409 host.go:66] Checking if "functional-686485" exists ...
I0929 11:45:54.274505  324409 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
I0929 11:45:54.295694  324409 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0929 11:45:54.298500  324409 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0929 11:45:54.301317  324409 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0929 11:45:54.301355  324409 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0929 11:45:54.301436  324409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
I0929 11:45:54.319121  324409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
I0929 11:45:54.426265  324409 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0929 11:45:54.426314  324409 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0929 11:45:54.444674  324409 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0929 11:45:54.444725  324409 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0929 11:45:54.463452  324409 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0929 11:45:54.463497  324409 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0929 11:45:54.481940  324409 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0929 11:45:54.481964  324409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0929 11:45:54.499083  324409 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0929 11:45:54.499109  324409 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0929 11:45:54.516928  324409 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0929 11:45:54.516950  324409 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0929 11:45:54.534778  324409 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0929 11:45:54.534800  324409 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0929 11:45:54.551970  324409 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0929 11:45:54.551993  324409 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0929 11:45:54.569455  324409 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0929 11:45:54.569516  324409 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0929 11:45:54.594090  324409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0929 11:45:55.346828  324409 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-686485 addons enable metrics-server

                                                
                                                
I0929 11:45:55.349773  324409 addons.go:201] Writing out "functional-686485" config to set dashboard=true...
W0929 11:45:55.350051  324409 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0929 11:45:55.350739  324409 kapi.go:59] client config for functional-686485: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt", KeyFile:"/home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.key", CAFile:"/home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0929 11:45:55.351270  324409 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0929 11:45:55.351288  324409 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0929 11:45:55.351294  324409 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0929 11:45:55.351303  324409 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0929 11:45:55.351308  324409 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0929 11:45:55.367627  324409 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  47dbe64a-92d9-4f0b-9dcd-382aae2629e5 1786 0 2025-09-29 11:45:55 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-29 11:45:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.134.189,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.134.189],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0929 11:45:55.367813  324409 out.go:285] * Launching proxy ...
* Launching proxy ...
I0929 11:45:55.367919  324409 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-686485 proxy --port 36195]
I0929 11:45:55.368244  324409 dashboard.go:157] Waiting for kubectl to output host:port ...
I0929 11:45:55.428808  324409 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0929 11:45:55.428860  324409 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0929 11:45:55.448897  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d44b0ac6-69cc-416f-a88c-c6edc9dfc9cf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x4000531180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000390f00 TLS:<nil>}
I0929 11:45:55.448976  324409 retry.go:31] will retry after 66.644µs: Temporary Error: unexpected response code: 503
I0929 11:45:55.452682  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c0187ed1-c2d9-47d8-8613-40e0c5a73af0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x4000531200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000391040 TLS:<nil>}
I0929 11:45:55.452737  324409 retry.go:31] will retry after 207.958µs: Temporary Error: unexpected response code: 503
I0929 11:45:55.457819  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[59372959-d33b-420d-bcbd-03cdecd45f64] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000391180 TLS:<nil>}
I0929 11:45:55.457877  324409 retry.go:31] will retry after 265.912µs: Temporary Error: unexpected response code: 503
I0929 11:45:55.462889  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[742562e5-c160-4d29-9db0-b19ae09805c8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c02c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472000 TLS:<nil>}
I0929 11:45:55.462943  324409 retry.go:31] will retry after 304.745µs: Temporary Error: unexpected response code: 503
I0929 11:45:55.472870  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a212f201-a7e5-4330-a584-e09fa7f93922] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472140 TLS:<nil>}
I0929 11:45:55.472928  324409 retry.go:31] will retry after 689.553µs: Temporary Error: unexpected response code: 503
I0929 11:45:55.483923  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4caefe3a-dee6-4016-8db3-11b24705b8b0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472280 TLS:<nil>}
I0929 11:45:55.483996  324409 retry.go:31] will retry after 1.094993ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.488785  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[495047a0-5f8e-4a35-8e6a-efc64015da00] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004723c0 TLS:<nil>}
I0929 11:45:55.488853  324409 retry.go:31] will retry after 965.48µs: Temporary Error: unexpected response code: 503
I0929 11:45:55.492353  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9389acd5-230f-4be1-8355-ca7ace74b330] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472500 TLS:<nil>}
I0929 11:45:55.492406  324409 retry.go:31] will retry after 1.524036ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.497005  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fbb6e1bc-e1fa-4501-8ff1-8ff0e6d3d7b6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004728c0 TLS:<nil>}
I0929 11:45:55.497052  324409 retry.go:31] will retry after 1.541497ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.501688  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d1565566-cafd-4c47-90fc-986bf4924e1c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472a00 TLS:<nil>}
I0929 11:45:55.501768  324409 retry.go:31] will retry after 2.629771ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.507401  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a50ece7e-8312-4f8f-a25c-eb5fdf16e0c6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x4000531680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003912c0 TLS:<nil>}
I0929 11:45:55.507453  324409 retry.go:31] will retry after 7.477978ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.519958  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9cb1de08-28be-47a1-a81d-307571cd95e8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x4000531780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472b40 TLS:<nil>}
I0929 11:45:55.520047  324409 retry.go:31] will retry after 10.693899ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.534445  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[efaf0564-8e4e-45f6-9da7-fba1149338cd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c07c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472c80 TLS:<nil>}
I0929 11:45:55.534512  324409 retry.go:31] will retry after 9.907055ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.548221  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b450a193-ef31-42c2-994d-db2da9fb8e91] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000473040 TLS:<nil>}
I0929 11:45:55.548305  324409 retry.go:31] will retry after 18.46424ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.571523  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c970f2c-19f5-4f35-9fd2-e0a1d3a8b7bf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c08c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000473180 TLS:<nil>}
I0929 11:45:55.571582  324409 retry.go:31] will retry after 31.38302ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.611931  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[164b2702-01b8-4265-acda-a975cabf7bc6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004732c0 TLS:<nil>}
I0929 11:45:55.612007  324409 retry.go:31] will retry after 22.471702ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.638012  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5545d670-d849-4e47-8490-b9b75a78ffd2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x4000531a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000473400 TLS:<nil>}
I0929 11:45:55.638078  324409 retry.go:31] will retry after 51.14081ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.693060  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[794a38cd-55f3-4018-90f6-b7a46c3337ec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000473540 TLS:<nil>}
I0929 11:45:55.693119  324409 retry.go:31] will retry after 106.760552ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.804379  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea33bcc8-49fd-496d-bb14-11e8e713d733] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x4000531b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000473680 TLS:<nil>}
I0929 11:45:55.804442  324409 retry.go:31] will retry after 133.167314ms: Temporary Error: unexpected response code: 503
I0929 11:45:55.940359  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d0735bfe-8950-4709-a52e-72a50997a0fb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:55 GMT]] Body:0x40007c0b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004737c0 TLS:<nil>}
I0929 11:45:55.940425  324409 retry.go:31] will retry after 263.339345ms: Temporary Error: unexpected response code: 503
I0929 11:45:56.207764  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ed274c0c-6f56-4c5a-a3b0-56fcabde450e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:56 GMT]] Body:0x4000531d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000473900 TLS:<nil>}
I0929 11:45:56.207830  324409 retry.go:31] will retry after 187.638156ms: Temporary Error: unexpected response code: 503
I0929 11:45:56.399887  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8be550eb-beb9-489d-999b-14ff5f05b304] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:56 GMT]] Body:0x4000531dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000391400 TLS:<nil>}
I0929 11:45:56.399950  324409 retry.go:31] will retry after 435.349903ms: Temporary Error: unexpected response code: 503
I0929 11:45:56.838402  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d43d0648-4074-4f35-8256-de7fa27fa9e0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:56 GMT]] Body:0x40007c0c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000473a40 TLS:<nil>}
I0929 11:45:56.838483  324409 retry.go:31] will retry after 874.62049ms: Temporary Error: unexpected response code: 503
I0929 11:45:57.716155  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bba94c6a-6b68-4b89-a547-86df7479371d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:57 GMT]] Body:0x4000531f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000391540 TLS:<nil>}
I0929 11:45:57.716219  324409 retry.go:31] will retry after 587.454473ms: Temporary Error: unexpected response code: 503
I0929 11:45:58.306763  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a46cd088-3960-4ff9-8c43-90b3dcc39630] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:58 GMT]] Body:0x40007c0d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000473b80 TLS:<nil>}
I0929 11:45:58.306846  324409 retry.go:31] will retry after 1.421096249s: Temporary Error: unexpected response code: 503
I0929 11:45:59.731356  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[973c9c2f-fc13-4cca-aa0d-e2152c3e687a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:45:59 GMT]] Body:0x40007c0f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003917c0 TLS:<nil>}
I0929 11:45:59.731419  324409 retry.go:31] will retry after 1.505480272s: Temporary Error: unexpected response code: 503
I0929 11:46:01.239956  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d430f9a3-b0dc-4c39-b837-5eec9ba80cb8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:46:01 GMT]] Body:0x400167c100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000391a40 TLS:<nil>}
I0929 11:46:01.240026  324409 retry.go:31] will retry after 4.195762301s: Temporary Error: unexpected response code: 503
I0929 11:46:05.440847  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[be648a12-b0b4-4e9c-977a-3b17c2667e6a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:46:05 GMT]] Body:0x40007c1080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000473cc0 TLS:<nil>}
I0929 11:46:05.440926  324409 retry.go:31] will retry after 3.599294118s: Temporary Error: unexpected response code: 503
I0929 11:46:09.043330  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b80ced5-07d7-4cd2-85c6-5891645782d0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:46:09 GMT]] Body:0x400167c240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000473e00 TLS:<nil>}
I0929 11:46:09.043391  324409 retry.go:31] will retry after 5.302299851s: Temporary Error: unexpected response code: 503
I0929 11:46:14.348832  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b47246c4-b1c6-4f42-a7e7-d6317aecdc34] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:46:14 GMT]] Body:0x400167c300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2280 TLS:<nil>}
I0929 11:46:14.348908  324409 retry.go:31] will retry after 18.639769168s: Temporary Error: unexpected response code: 503
I0929 11:46:32.995483  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b1f65fda-4513-448d-a7a2-583501e25f20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:46:32 GMT]] Body:0x400167c3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b23c0 TLS:<nil>}
I0929 11:46:32.995544  324409 retry.go:31] will retry after 27.977751632s: Temporary Error: unexpected response code: 503
I0929 11:47:00.976272  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac0f9254-5cd1-4519-8be1-92de2b897764] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:47:00 GMT]] Body:0x400167c480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2500 TLS:<nil>}
I0929 11:47:00.976355  324409 retry.go:31] will retry after 33.925484815s: Temporary Error: unexpected response code: 503
I0929 11:47:34.908065  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4478d66a-aecf-4235-9df0-d49f40fd6477] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:47:34 GMT]] Body:0x40007c1300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2780 TLS:<nil>}
I0929 11:47:34.908133  324409 retry.go:31] will retry after 45.368501366s: Temporary Error: unexpected response code: 503
I0929 11:48:20.280539  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5abb5bf-de51-48f8-ae04-f4e1be0883ff] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:48:20 GMT]] Body:0x400167c100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b28c0 TLS:<nil>}
I0929 11:48:20.280608  324409 retry.go:31] will retry after 59.459590548s: Temporary Error: unexpected response code: 503
I0929 11:49:19.743703  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6834654f-35a3-4da9-b79a-84658e32dea1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:49:19 GMT]] Body:0x40007c0200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2c80 TLS:<nil>}
I0929 11:49:19.743771  324409 retry.go:31] will retry after 1m24.159207591s: Temporary Error: unexpected response code: 503
I0929 11:50:43.907094  324409 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5158895b-13b7-411a-b452-c98137b5717e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:50:43 GMT]] Body:0x40007c0240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2dc0 TLS:<nil>}
I0929 11:50:43.907163  324409 retry.go:31] will retry after 1m18.499585473s: Temporary Error: unexpected response code: 503
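The retries above are the dashboard health check: the test proxies the kubernetes-dashboard service through kubectl proxy on 127.0.0.1:36195 and polls it with increasing backoff, but every attempt over the roughly five-minute run returns 503, so the command never prints a URL (the "output didn't produce a URL" failure above). A minimal Go sketch of that polling pattern follows; the URL matches the log, while the helper name, backoff bounds and deadline are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForProxy polls url until it answers with a non-503 status or the
// deadline expires, roughly doubling the delay between attempts the way
// the retry.go lines above do. Illustrative only; not minikube's code.
func waitForProxy(url string, deadline time.Duration) error {
	delay := 100 * time.Microsecond
	timeout := time.After(deadline)
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode != http.StatusServiceUnavailable {
				return nil // dashboard answered something other than 503
			}
		}
		select {
		case <-timeout:
			return fmt.Errorf("proxy still unhealthy after %s", deadline)
		case <-time.After(delay):
			delay *= 2 // exponential backoff between attempts
		}
	}
}

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := waitForProxy(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}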
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-686485
helpers_test.go:243: (dbg) docker inspect functional-686485:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7",
	        "Created": "2025-09-29T11:28:49.947415423Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 312755,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T11:28:50.027029754Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/hosts",
	        "LogPath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7-json.log",
	        "Name": "/functional-686485",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-686485:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-686485",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7",
	                "LowerDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573-init/diff:/var/lib/docker/overlay2/83e06d49de89e61a1046432dce270924281d24e14aa4bd929fb6d16b3962f5cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-686485",
	                "Source": "/var/lib/docker/volumes/functional-686485/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-686485",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-686485",
	                "name.minikube.sigs.k8s.io": "functional-686485",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba7e2032b4c7c1852078cf95c118e09db7ba68bd9e71a188e5c4248100ffad60",
	            "SandboxKey": "/var/run/docker/netns/ba7e2032b4c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-686485": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:a8:81:2a:9e:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4625b3a7a589d85dc9c16fe0c52282f454d62ce28670a509256f159d69a12956",
	                    "EndpointID": "acc054f48a40506f78db053c88ca5833fc530f5b8f98803782cea0419d707da1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-686485",
	                        "94cef4d5f9be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
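The Ports block in the inspect output above (22/tcp -> 33148, 8441/tcp -> 33151, and so on) is what the earlier cli_runner calls read back through docker's Go template. A minimal sketch of that lookup, assuming docker is on PATH and using the container name functional-686485 from the log; the helper name is illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks docker for the host port bound to containerPort on the given
// container, using the same Go template as the cli_runner call in the log.
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// e.g. the apiserver port 8441/tcp maps to 33151 in the inspect output above
	port, err := hostPort("functional-686485", "8441/tcp")
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("host port:", port)
}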
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-686485 -n functional-686485
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 logs -n 25: (1.880176028s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-686485 image load --daemon kicbase/echo-server:functional-686485 --alsologtostderr                                                             │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image ls                                                                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image save kicbase/echo-server:functional-686485 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image rm kicbase/echo-server:functional-686485 --alsologtostderr                                                                        │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image ls                                                                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image ls                                                                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image save --daemon kicbase/echo-server:functional-686485 --alsologtostderr                                                             │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ ssh            │ functional-686485 ssh sudo cat /etc/test/nested/copy/294425/hosts                                                                                         │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ ssh            │ functional-686485 ssh sudo cat /etc/ssl/certs/294425.pem                                                                                                  │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ ssh            │ functional-686485 ssh sudo cat /usr/share/ca-certificates/294425.pem                                                                                      │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ ssh            │ functional-686485 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ ssh            │ functional-686485 ssh sudo cat /etc/ssl/certs/2944252.pem                                                                                                 │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ ssh            │ functional-686485 ssh sudo cat /usr/share/ca-certificates/2944252.pem                                                                                     │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ ssh            │ functional-686485 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image ls --format short --alsologtostderr                                                                                               │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image ls --format yaml --alsologtostderr                                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ ssh            │ functional-686485 ssh pgrep buildkitd                                                                                                                     │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │                     │
	│ image          │ functional-686485 image build -t localhost/my-image:functional-686485 testdata/build --alsologtostderr                                                    │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image ls                                                                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image ls --format json --alsologtostderr                                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ image          │ functional-686485 image ls --format table --alsologtostderr                                                                                               │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ update-context │ functional-686485 update-context --alsologtostderr -v=2                                                                                                   │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ update-context │ functional-686485 update-context --alsologtostderr -v=2                                                                                                   │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	│ update-context │ functional-686485 update-context --alsologtostderr -v=2                                                                                                   │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:47 UTC │ 29 Sep 25 11:47 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:45:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:45:53.791192  324336 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:45:53.791403  324336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:45:53.791486  324336 out.go:374] Setting ErrFile to fd 2...
	I0929 11:45:53.791528  324336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:45:53.791899  324336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 11:45:53.792620  324336 out.go:368] Setting JSON to false
	I0929 11:45:53.793764  324336 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5305,"bootTime":1759141049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0929 11:45:53.793922  324336 start.go:140] virtualization:  
	I0929 11:45:53.797207  324336 out.go:179] * [functional-686485] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 11:45:53.800330  324336 notify.go:220] Checking for updates...
	I0929 11:45:53.800284  324336 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:45:53.803621  324336 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:45:53.806526  324336 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:45:53.809385  324336 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	I0929 11:45:53.812217  324336 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 11:45:53.815162  324336 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:45:53.818404  324336 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:45:53.819014  324336 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:45:53.849864  324336 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 11:45:53.849987  324336 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:45:53.912632  324336 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 11:45:53.901807135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:45:53.912736  324336 docker.go:318] overlay module found
	I0929 11:45:53.916139  324336 out.go:179] * Using the docker driver based on existing profile
	I0929 11:45:53.919151  324336 start.go:304] selected driver: docker
	I0929 11:45:53.919171  324336 start.go:924] validating driver "docker" against &{Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:45:53.919272  324336 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:45:53.919402  324336 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:45:53.981823  324336 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 11:45:53.972871121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:45:53.982236  324336 cni.go:84] Creating CNI manager for ""
	I0929 11:45:53.982305  324336 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:45:53.982354  324336 start.go:348] cluster config:
	{Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:45:53.985341  324336 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 29 11:49:39 functional-686485 crio[4144]: time="2025-09-29 11:49:39.205997911Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=08103249-717a-45e8-af23-4892ea8ab0af name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:49:39 functional-686485 crio[4144]: time="2025-09-29 11:49:39.206825690Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=08103249-717a-45e8-af23-4892ea8ab0af name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:49:42 functional-686485 crio[4144]: time="2025-09-29 11:49:42.204935144Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0b6f3353-8732-45ab-b8e4-cc9b87d1319f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:49:42 functional-686485 crio[4144]: time="2025-09-29 11:49:42.205201752Z" level=info msg="Image docker.io/nginx:alpine not found" id=0b6f3353-8732-45ab-b8e4-cc9b87d1319f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:49:52 functional-686485 crio[4144]: time="2025-09-29 11:49:52.204475293Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4074239c-8b5a-4853-a309-7cd6e5c81245 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:49:52 functional-686485 crio[4144]: time="2025-09-29 11:49:52.204759025Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=4074239c-8b5a-4853-a309-7cd6e5c81245 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:49:53 functional-686485 crio[4144]: time="2025-09-29 11:49:53.205022867Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b58ef4fa-6a65-490f-b76d-c3eaeb466d59 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:49:53 functional-686485 crio[4144]: time="2025-09-29 11:49:53.205248928Z" level=info msg="Image docker.io/nginx:alpine not found" id=b58ef4fa-6a65-490f-b76d-c3eaeb466d59 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:04 functional-686485 crio[4144]: time="2025-09-29 11:50:04.205075823Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e2401a0c-56ca-4c83-9b6b-7dcf7c346b66 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:04 functional-686485 crio[4144]: time="2025-09-29 11:50:04.205306157Z" level=info msg="Image docker.io/nginx:alpine not found" id=e2401a0c-56ca-4c83-9b6b-7dcf7c346b66 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:15 functional-686485 crio[4144]: time="2025-09-29 11:50:15.205013173Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7532c4e2-a3cc-450b-b1f9-5752d3952472 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:15 functional-686485 crio[4144]: time="2025-09-29 11:50:15.205225671Z" level=info msg="Image docker.io/nginx:alpine not found" id=7532c4e2-a3cc-450b-b1f9-5752d3952472 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:28 functional-686485 crio[4144]: time="2025-09-29 11:50:28.115204068Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b8db81b9-7fda-48d7-98d4-3d610fc12c6c name=/runtime.v1.ImageService/PullImage
	Sep 29 11:50:28 functional-686485 crio[4144]: time="2025-09-29 11:50:28.116015249Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=73ed6eda-33bf-40e2-9e16-3945e1b77a44 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:50:28 functional-686485 crio[4144]: time="2025-09-29 11:50:28.119276752Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 11:50:29 functional-686485 crio[4144]: time="2025-09-29 11:50:29.205053415Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=5b2cb49b-5076-46c6-97d9-0c23a4f6c4f4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:29 functional-686485 crio[4144]: time="2025-09-29 11:50:29.205280722Z" level=info msg="Image docker.io/nginx:alpine not found" id=5b2cb49b-5076-46c6-97d9-0c23a4f6c4f4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:40 functional-686485 crio[4144]: time="2025-09-29 11:50:40.204593650Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=c5fdae70-c619-4d60-a88a-d904edea2e18 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:40 functional-686485 crio[4144]: time="2025-09-29 11:50:40.204896032Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=c5fdae70-c619-4d60-a88a-d904edea2e18 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:41 functional-686485 crio[4144]: time="2025-09-29 11:50:41.205054267Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0be37819-437f-4676-92e8-d58c7be6053b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:41 functional-686485 crio[4144]: time="2025-09-29 11:50:41.205274674Z" level=info msg="Image docker.io/nginx:alpine not found" id=0be37819-437f-4676-92e8-d58c7be6053b name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:54 functional-686485 crio[4144]: time="2025-09-29 11:50:54.204391484Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d2ba6029-c1fe-4dd3-a566-7ecbfdc26265 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:54 functional-686485 crio[4144]: time="2025-09-29 11:50:54.204623361Z" level=info msg="Image docker.io/nginx:alpine not found" id=d2ba6029-c1fe-4dd3-a566-7ecbfdc26265 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:55 functional-686485 crio[4144]: time="2025-09-29 11:50:55.208513489Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=7e0839ac-2b67-4b32-b6d9-10ae0c842583 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:50:55 functional-686485 crio[4144]: time="2025-09-29 11:50:55.208861932Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=7e0839ac-2b67-4b32-b6d9-10ae0c842583 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9065de535e395       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   c35f82d118765       busybox-mount
	0acab5d004ce0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      19 minutes ago      Running             coredns                   2                   c3fb257895e24       coredns-66bc5c9577-fcmb4
	206bb1f896aa2       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                      19 minutes ago      Running             kube-proxy                2                   85a762676d3f5       kube-proxy-xs8dc
	5832399e5aad8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      19 minutes ago      Running             kindnet-cni               2                   b5e389b82b519       kindnet-btlb5
	e8c3e33f13e06       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      19 minutes ago      Running             storage-provisioner       2                   2d780174ae1a9       storage-provisioner
	6b45c8de9c4ce       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                      19 minutes ago      Running             kube-controller-manager   2                   bae0a8024391e       kube-controller-manager-functional-686485
	7e02997c4a169       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                      19 minutes ago      Running             kube-apiserver            0                   3924fd9382104       kube-apiserver-functional-686485
	7965a1dedcfc2       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                      19 minutes ago      Running             kube-scheduler            2                   1b1e5f3189429       kube-scheduler-functional-686485
	6de9008a9f773       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      19 minutes ago      Running             etcd                      2                   48d4a4927f8d9       etcd-functional-686485
	60e8786739776       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      20 minutes ago      Exited              etcd                      1                   48d4a4927f8d9       etcd-functional-686485
	a43e6af95e6d5       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                      20 minutes ago      Exited              kube-proxy                1                   85a762676d3f5       kube-proxy-xs8dc
	ad4e2c1e43fa8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      20 minutes ago      Exited              kindnet-cni               1                   b5e389b82b519       kindnet-btlb5
	67e39ed141cbe       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                      20 minutes ago      Exited              kube-controller-manager   1                   bae0a8024391e       kube-controller-manager-functional-686485
	ff16b9dcb68ef       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      20 minutes ago      Exited              coredns                   1                   c3fb257895e24       coredns-66bc5c9577-fcmb4
	84cae6a2613ef       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                      20 minutes ago      Exited              kube-scheduler            1                   1b1e5f3189429       kube-scheduler-functional-686485
	0468e3a72325e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      20 minutes ago      Exited              storage-provisioner       1                   2d780174ae1a9       storage-provisioner
	
	
	==> coredns [0acab5d004ce042926912ef6b546568fdb5f73d8e9af6c1bb44c31d95c375308] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34138 - 924 "HINFO IN 3820287925151504037.9173467966324991986. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026454485s
	
	
	==> coredns [ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44113 - 5213 "HINFO IN 332428050901775543.5573564185816815509. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.031089405s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-686485
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-686485
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=functional-686485
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_29_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:29:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-686485
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:50:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:48:01 +0000   Mon, 29 Sep 2025 11:29:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:48:01 +0000   Mon, 29 Sep 2025 11:29:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:48:01 +0000   Mon, 29 Sep 2025 11:29:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:48:01 +0000   Mon, 29 Sep 2025 11:30:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-686485
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8f830ab79054939be3711c0d700e16a
	  System UUID:                bca24e78-d01e-4d6f-bf99-8242d437899c
	  Boot ID:                    3ea59072-b9ed-4996-bd90-d451fda04a88
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-jc96t                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-node-connect-7d85dfc575-w8tc4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-fcmb4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 etcd-functional-686485                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-btlb5                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-functional-686485              250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-functional-686485     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-xs8dc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-functional-686485              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-r8kmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qkwxf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 20m                kube-proxy       
	  Normal   NodeHasSufficientMemory  21m (x9 over 21m)  kubelet          Node functional-686485 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node functional-686485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node functional-686485 status is now: NodeHasSufficientPID
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 21m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     21m                kubelet          Node functional-686485 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node functional-686485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node functional-686485 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           21m                node-controller  Node functional-686485 event: Registered Node functional-686485 in Controller
	  Normal   NodeReady                20m                kubelet          Node functional-686485 status is now: NodeReady
	  Normal   RegisteredNode           20m                node-controller  Node functional-686485 event: Registered Node functional-686485 in Controller
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node functional-686485 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node functional-686485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x8 over 20m)  kubelet          Node functional-686485 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19m                node-controller  Node functional-686485 event: Registered Node functional-686485 in Controller
	
	
	==> dmesg <==
	[Sep29 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015067] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.515134] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034601] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.790647] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.751861] kauditd_printk_skb: 36 callbacks suppressed
	[Sep29 10:36] hrtimer: interrupt took 21542036 ns
	[Sep29 11:19] kauditd_printk_skb: 8 callbacks suppressed
	[Sep29 11:47] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41] <==
	{"level":"warn","ts":"2025-09-29T11:30:21.021527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.037463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.061168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.084023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.111050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.134277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.296547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51802","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:30:41.875243Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:30:41.875311Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-686485","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T11:30:41.875410Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:30:42.014972Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:30:42.016499Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016561Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016615Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:30:42.016625Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:30:42.016589Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T11:30:42.016663Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:30:42.016688Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016768Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016813Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:30:42.016851Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:30:42.020537Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T11:30:42.020617Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:30:42.020650Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T11:30:42.020658Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-686485","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [6de9008a9f773874f4e141a751358ca0571fe1f41170f35bc9f8f40c67ba6e9b] <==
	{"level":"warn","ts":"2025-09-29T11:30:58.022642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.034708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.053829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.076395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.101099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.111791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.125275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.145141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.166473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.177241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.199870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.213274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.235431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.313116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.317723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.355398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.392479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.443466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.543845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32852","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:40:57.034340Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1014}
	{"level":"info","ts":"2025-09-29T11:40:57.058031Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1014,"took":"23.086445ms","hash":2302584538,"current-db-size-bytes":3276800,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1433600,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-29T11:40:57.058079Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2302584538,"revision":1014,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T11:45:57.041006Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1446}
	{"level":"info","ts":"2025-09-29T11:45:57.045454Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1446,"took":"4.005858ms","hash":4033403844,"current-db-size-bytes":3276800,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":2273280,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-09-29T11:45:57.045506Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4033403844,"revision":1446,"compact-revision":1014}
	
	
	==> kernel <==
	 11:50:55 up  1:33,  0 users,  load average: 0.06, 0.18, 0.82
	Linux functional-686485 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [5832399e5aad8fff21002574d904a6ff3227773daa9dd3dd491dbc401fa6c427] <==
	I0929 11:48:51.030242       1 main.go:301] handling current node
	I0929 11:49:01.029625       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:49:01.029727       1 main.go:301] handling current node
	I0929 11:49:11.030080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:49:11.030203       1 main.go:301] handling current node
	I0929 11:49:21.037629       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:49:21.037670       1 main.go:301] handling current node
	I0929 11:49:31.031065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:49:31.031104       1 main.go:301] handling current node
	I0929 11:49:41.029681       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:49:41.029717       1 main.go:301] handling current node
	I0929 11:49:51.030151       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:49:51.030222       1 main.go:301] handling current node
	I0929 11:50:01.030145       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:50:01.030276       1 main.go:301] handling current node
	I0929 11:50:11.033908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:50:11.034018       1 main.go:301] handling current node
	I0929 11:50:21.034188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:50:21.034223       1 main.go:301] handling current node
	I0929 11:50:31.038257       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:50:31.038291       1 main.go:301] handling current node
	I0929 11:50:41.030358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:50:41.030547       1 main.go:301] handling current node
	I0929 11:50:51.029470       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:50:51.029580       1 main.go:301] handling current node
	
	
	==> kindnet [ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8] <==
	I0929 11:30:18.823322       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 11:30:18.828165       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 11:30:18.829360       1 main.go:148] setting mtu 1500 for CNI 
	I0929 11:30:18.830669       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 11:30:18.830833       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T11:30:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 11:30:19.107517       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 11:30:19.107544       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 11:30:19.107553       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 11:30:19.107857       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 11:30:23.208854       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 11:30:23.208959       1 metrics.go:72] Registering metrics
	I0929 11:30:23.209067       1 controller.go:711] "Syncing nftables rules"
	I0929 11:30:29.106961       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:30:29.107024       1 main.go:301] handling current node
	I0929 11:30:39.107229       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:30:39.107260       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7e02997c4a169c16aa505782e2302d588dd8856611e6e53513deba2f5708373a] <==
	I0929 11:38:09.352696       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:38:46.834594       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:39:30.637689       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:39:47.814594       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:40:31.030400       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:40:59.436223       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 11:41:04.706564       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:35.474970       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:42:29.110774       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:42:55.607607       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:43:41.520357       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:44:02.464065       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:45:06.687045       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:45:07.365017       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:45:55.025976       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 11:45:55.307070       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.134.189"}
	I0929 11:45:55.336960       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.156.96"}
	I0929 11:46:22.584103       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:46:29.528357       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:47:24.004118       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:47:46.527201       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:48:39.196777       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:48:51.418337       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:49:55.528612       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:50:07.910743       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc] <==
	I0929 11:30:25.542283       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 11:30:25.542423       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:30:25.542497       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:30:25.542603       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-686485"
	I0929 11:30:25.542713       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:30:25.542747       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:30:25.542775       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 11:30:25.543066       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:30:25.548027       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 11:30:25.553859       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 11:30:25.557143       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:30:25.560371       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 11:30:25.562633       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:30:25.564920       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 11:30:25.567147       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 11:30:25.569765       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:30:25.573397       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:30:25.577340       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 11:30:25.583473       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:30:25.583515       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:30:25.584345       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:30:25.584416       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 11:30:25.593361       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:30:25.600470       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 11:30:25.602738       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-controller-manager [6b45c8de9c4ceaccdd57cc9b24372eb3e9939690f47613e29be4b40cd51089ef] <==
	I0929 11:31:02.866874       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:31:02.866887       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 11:31:02.870049       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 11:31:02.885200       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:31:02.892593       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:31:02.889045       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 11:31:02.885222       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 11:31:02.889059       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 11:31:02.889070       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 11:31:02.889086       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 11:31:02.889097       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 11:31:02.893705       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:31:02.900998       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:31:02.967539       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:31:03.027692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:31:03.027846       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:31:03.027900       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0929 11:45:55.123481       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:45:55.143513       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:45:55.152023       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:45:55.160591       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:45:55.166174       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:45:55.173975       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:45:55.174568       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:45:55.187755       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [206bb1f896aa2f49fa4a9317779250e75cb2dcb01e984f74509e7b0c53120a9f] <==
	I0929 11:31:00.851542       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:31:00.945683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:31:01.047150       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:31:01.047194       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:31:01.047260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:31:01.155283       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:31:01.155348       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:31:01.159905       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:31:01.160233       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:31:01.160258       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:31:01.161823       1 config.go:200] "Starting service config controller"
	I0929 11:31:01.161842       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:31:01.161860       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:31:01.161865       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:31:01.161876       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:31:01.161881       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:31:01.162622       1 config.go:309] "Starting node config controller"
	I0929 11:31:01.162640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:31:01.162648       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:31:01.262759       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:31:01.262805       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:31:01.262852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405] <==
	I0929 11:30:23.555511       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:30:23.722149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:30:23.823478       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:30:23.836615       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:30:23.836768       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:30:23.890249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:30:23.890411       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:30:23.895038       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:30:23.895383       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:30:23.895532       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:30:23.896771       1 config.go:200] "Starting service config controller"
	I0929 11:30:23.896829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:30:23.896869       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:30:23.896900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:30:23.896940       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:30:23.896967       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:30:23.897640       1 config.go:309] "Starting node config controller"
	I0929 11:30:23.897704       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:30:23.897749       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:30:23.997604       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:30:23.997652       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:30:23.997666       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7965a1dedcfc222b178cf7fa524081fbc78a93dd33338864f2d39c59fa5a3fe3] <==
	I0929 11:30:58.505418       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:30:59.404852       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:30:59.404958       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:30:59.404994       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:30:59.405035       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:30:59.453738       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:30:59.453770       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:30:59.455978       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:59.456059       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:59.456368       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:30:59.456639       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:30:59.560470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939] <==
	I0929 11:30:21.463798       1 serving.go:386] Generated self-signed cert in-memory
	I0929 11:30:24.008390       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:30:24.008422       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:30:24.013863       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:30:24.013954       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 11:30:24.013977       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 11:30:24.014007       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:30:24.016191       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:24.016218       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:24.016235       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:24.016240       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:24.114075       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 11:30:24.116806       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:24.116889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:41.877802       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 11:30:41.877832       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 11:30:41.877893       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 11:30:41.877933       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:41.877964       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:41.879564       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0929 11:30:41.880048       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 11:30:41.880155       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 11:50:53 functional-686485 kubelet[4481]: E0929 11:50:53.203927    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-jc96t" podUID="dfb2492f-5d90-4b4b-a067-e71679a1b43c"
	Sep 29 11:50:54 functional-686485 kubelet[4481]: E0929 11:50:54.205089    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="0e76e3fd-a5fb-4ed9-9d4c-081d08f4e974"
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.209359    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-r8kmt" podUID="0e078c2c-40fe-4363-84e1-16bbafd45459"
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.298880    4481 container_manager_linux.go:562] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7, memory: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/system.slice/kubelet.service"
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.299927    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a: Error finding container 49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a: Status 404 returned error can't find the container with id 49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.300188    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe: Error finding container b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe: Status 404 returned error can't find the container with id b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.300436    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639: Error finding container 2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639: Status 404 returned error can't find the container with id 2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.300701    4481 manager.go:1116] Failed to create existing container: /crio-9af55b288172885384bc186e9a1e96d3a810c5bb9b058d23400ba9c96162ea85: Error finding container 9af55b288172885384bc186e9a1e96d3a810c5bb9b058d23400ba9c96162ea85: Status 404 returned error can't find the container with id 9af55b288172885384bc186e9a1e96d3a810c5bb9b058d23400ba9c96162ea85
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.301331    4481 manager.go:1116] Failed to create existing container: /crio-48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d: Error finding container 48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d: Status 404 returned error can't find the container with id 48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.301855    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4: Error finding container c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4: Status 404 returned error can't find the container with id c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.302035    4481 manager.go:1116] Failed to create existing container: /crio-85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770: Error finding container 85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770: Status 404 returned error can't find the container with id 85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.302327    4481 manager.go:1116] Failed to create existing container: /crio-1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7: Error finding container 1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7: Status 404 returned error can't find the container with id 1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.302532    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7: Error finding container 1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7: Status 404 returned error can't find the container with id 1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.302752    4481 manager.go:1116] Failed to create existing container: /crio-c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4: Error finding container c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4: Status 404 returned error can't find the container with id c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.303088    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-9af55b288172885384bc186e9a1e96d3a810c5bb9b058d23400ba9c96162ea85: Error finding container 9af55b288172885384bc186e9a1e96d3a810c5bb9b058d23400ba9c96162ea85: Status 404 returned error can't find the container with id 9af55b288172885384bc186e9a1e96d3a810c5bb9b058d23400ba9c96162ea85
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.303843    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d: Error finding container 48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d: Status 404 returned error can't find the container with id 48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.304064    4481 manager.go:1116] Failed to create existing container: /crio-bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e: Error finding container bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e: Status 404 returned error can't find the container with id bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.304323    4481 manager.go:1116] Failed to create existing container: /crio-2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639: Error finding container 2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639: Status 404 returned error can't find the container with id 2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.304557    4481 manager.go:1116] Failed to create existing container: /crio-49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a: Error finding container 49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a: Status 404 returned error can't find the container with id 49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.304770    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e: Error finding container bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e: Status 404 returned error can't find the container with id bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.304975    4481 manager.go:1116] Failed to create existing container: /crio-b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe: Error finding container b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe: Status 404 returned error can't find the container with id b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.305211    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770: Error finding container 85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770: Status 404 returned error can't find the container with id 85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.525321    4481 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146655525031382 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199131} inodes_used:{value:104}}"
	Sep 29 11:50:55 functional-686485 kubelet[4481]: E0929 11:50:55.525354    4481 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146655525031382 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199131} inodes_used:{value:104}}"
	Sep 29 11:50:56 functional-686485 kubelet[4481]: E0929 11:50:56.203791    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3f43496a-bd99-48c7-a4c6-2775c0a9ffc0"
	
	
	==> storage-provisioner [0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d] <==
	I0929 11:30:19.697307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 11:30:23.184671       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 11:30:23.184721       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 11:30:23.202638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:26.676372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:30.936415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:34.535393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:37.589413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:40.611236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:40.618353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:30:40.618613       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 11:30:40.618782       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"33ea9649-429c-4699-8969-249f1f9741d0", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-686485_b4c49bc4-1e05-4be1-82e5-7d48a5f3f843 became leader
	I0929 11:30:40.618838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-686485_b4c49bc4-1e05-4be1-82e5-7d48a5f3f843!
	W0929 11:30:40.620931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:40.626871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:30:40.721671       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-686485_b4c49bc4-1e05-4be1-82e5-7d48a5f3f843!
	
	
	==> storage-provisioner [e8c3e33f13e06057074796f0984e1abe6593c9a7d5cf652efcac19bcbcd63795] <==
	W0929 11:50:31.563559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:33.566480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:33.570937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:35.574326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:35.580765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:37.586790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:37.593270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:39.596238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:39.601614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:41.604228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:41.608887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:43.611558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:43.615760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:45.618675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:45.623931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:47.627611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:47.635302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:49.638618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:49.643061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:51.646482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:51.653033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:53.655646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:53.659832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:55.663273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:50:55.671422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
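
The kubelet entries in the log dump above show the two pull failures that drive the remaining test failures: CRI-O rejects the bare reference "kicbase/echo-server" because no unqualified-search registries are defined in /etc/containers/registries.conf, and every docker.io pull is answered with Docker Hub's unauthenticated rate limit (toomanyrequests). A minimal sketch for confirming the short-name setting on the node follows; it reuses this report's minikube binary and profile, and treating docker.io as the intended search registry is an assumption, not something the logs state.

# Sketch: inspect which registries CRI-O may use to expand short image names.
out/minikube-linux-arm64 -p functional-686485 ssh "sudo cat /etc/containers/registries.conf"
# If short names were meant to resolve via Docker Hub, the file would carry the
# top-level containers-registries.conf v2 line below (restart CRI-O after editing):
#   unqualified-search-registries = ["docker.io"]
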
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-686485 -n functional-686485
helpers_test.go:269: (dbg) Run:  kubectl --context functional-686485 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-jc96t hello-node-connect-7d85dfc575-w8tc4 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-r8kmt kubernetes-dashboard-855c9754f9-qkwxf
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-686485 describe pod busybox-mount hello-node-75c85bcc94-jc96t hello-node-connect-7d85dfc575-w8tc4 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-r8kmt kubernetes-dashboard-855c9754f9-qkwxf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-686485 describe pod busybox-mount hello-node-75c85bcc94-jc96t hello-node-connect-7d85dfc575-w8tc4 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-r8kmt kubernetes-dashboard-855c9754f9-qkwxf: exit status 1 (125.061514ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:45:41 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://9065de535e395966a78f2d3f35cf0e27d79087cbb07adbda6b8c8318eb417911
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 11:45:44 +0000
	      Finished:     Mon, 29 Sep 2025 11:45:45 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rd9nz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rd9nz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m15s  default-scheduler  Successfully assigned default/busybox-mount to functional-686485
	  Normal  Pulling    5m15s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m12s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.37s (3.37s including waiting). Image size: 3774172 bytes.
	  Normal  Created    5m12s  kubelet            Created container: mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-jc96t
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:37:16 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lcj6k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lcj6k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-jc96t to functional-686485
	  Normal   Pulling    10m (x5 over 13m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     9m46s (x5 over 13m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     9m46s (x5 over 13m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    3m35s (x38 over 13m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3m35s (x38 over 13m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-w8tc4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:35:34 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qd2f2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qd2f2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w8tc4 to functional-686485
	  Normal   Pulling    11m (x5 over 15m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     11m (x5 over 14m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     11m (x5 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     4m42s (x41 over 14m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    8s (x51 over 14m)     kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:31:24 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vslmk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vslmk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  19m                   default-scheduler  Successfully assigned default/nginx-svc to functional-686485
	  Warning  Failed     18m                   kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     14m (x2 over 17m)     kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    13m (x5 over 19m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     13m (x5 over 18m)     kubelet            Error: ErrImagePull
	  Warning  Failed     8m38s (x28 over 18m)  kubelet            Error: ImagePullBackOff
	  Warning  Failed     3m (x4 over 16m)      kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m48s (x46 over 18m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:31:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qtcb2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-qtcb2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  19m                   default-scheduler  Successfully assigned default/sp-pod to functional-686485
	  Normal   Pulling    12m (x5 over 19m)     kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x5 over 18m)     kubelet            Error: ErrImagePull
	  Warning  Failed     8m15s (x27 over 18m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m16s (x46 over 18m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     119s (x7 over 18m)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-r8kmt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qkwxf" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-686485 describe pod busybox-mount hello-node-75c85bcc94-jc96t hello-node-connect-7d85dfc575-w8tc4 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-r8kmt kubernetes-dashboard-855c9754f9-qkwxf: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.95s)
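
Every pod in the non-running list above is stuck on an image pull: the dashboard and nginx images hit Docker Hub's unauthenticated rate limit, and the echo-server pods hit the short-name restriction already visible in the kubelet log. A sketch for taking Docker Hub out of the pull path on a rate-limited runner follows; it assumes the images can be pulled once on the host (where credentials or a mirror may be available) and then loaded into the cluster with minikube's image load command.

# Sketch: pre-load the rate-limited images so the kubelet never has to pull them.
docker pull docker.io/nginx:alpine
out/minikube-linux-arm64 -p functional-686485 image load docker.io/nginx:alpine
docker pull docker.io/kubernetesui/metrics-scraper:v1.0.8
out/minikube-linux-arm64 -p functional-686485 image load docker.io/kubernetesui/metrics-scraper:v1.0.8
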

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-686485 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-686485 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-w8tc4" [fcebf2d8-e667-466a-9b18-4dfaedada25e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
I0929 11:35:40.386929  294425 retry.go:31] will retry after 7.266998512s: Temporary Error: Get "http:": http: no Host in request URL
I0929 11:35:47.654763  294425 retry.go:31] will retry after 16.802686807s: Temporary Error: Get "http:": http: no Host in request URL
I0929 11:36:04.457767  294425 retry.go:31] will retry after 26.44089365s: Temporary Error: Get "http:": http: no Host in request URL
I0929 11:36:30.899825  294425 retry.go:31] will retry after 45.613326246s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
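
The retry loop above was probing "http:" with no host at all, so the test never reached the NodePort; whether that empty URL came from the URL helper or simply from the pod never becoming Ready is not visible in this output. A manual check of what URL minikube would hand out, sketched below, separates those two possibilities.

# Sketch: ask minikube for the NodePort URL it would expose for this service.
out/minikube-linux-arm64 -p functional-686485 service hello-node-connect --url
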
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-686485 -n functional-686485
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-29 11:45:35.074514666 +0000 UTC m=+1519.872287457
functional_test.go:1645: (dbg) Run:  kubectl --context functional-686485 describe po hello-node-connect-7d85dfc575-w8tc4 -n default
functional_test.go:1645: (dbg) kubectl --context functional-686485 describe po hello-node-connect-7d85dfc575-w8tc4 -n default:
Name:             hello-node-connect-7d85dfc575-w8tc4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-686485/192.168.49.2
Start Time:       Mon, 29 Sep 2025 11:35:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qd2f2 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qd2f2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w8tc4 to functional-686485
Normal   Pulling    6m17s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m17s (x5 over 9m35s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m17s (x5 over 9m35s)   kubelet            Error: ErrImagePull
Warning  Failed     4m30s (x18 over 9m35s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    3m50s (x21 over 9m35s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-686485 logs hello-node-connect-7d85dfc575-w8tc4 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-686485 logs hello-node-connect-7d85dfc575-w8tc4 -n default: exit status 1 (102.514057ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w8tc4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-686485 logs hello-node-connect-7d85dfc575-w8tc4 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
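
The describe output pins this failure on the bare image reference "kicbase/echo-server": with no unqualified-search registries defined, CRI-O refuses to guess a registry for it, so the pull never reaches the network. A fully qualified reference sidesteps the short-name logic entirely; the sketch below mirrors the test's own create/expose commands but assumes docker.io/kicbase/echo-server:1.0 is the intended source, which this output does not confirm.

# Sketch: recreate the deployment with a fully qualified image reference (image/tag assumed).
kubectl --context functional-686485 delete deployment hello-node-connect
kubectl --context functional-686485 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:1.0
kubectl --context functional-686485 expose deployment hello-node-connect --type=NodePort --port=8080
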
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-686485 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-w8tc4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-686485/192.168.49.2
Start Time:       Mon, 29 Sep 2025 11:35:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qd2f2 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qd2f2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w8tc4 to functional-686485
Normal   Pulling    6m17s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m17s (x5 over 9m35s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m17s (x5 over 9m35s)   kubelet            Error: ErrImagePull
Warning  Failed     4m30s (x18 over 9m35s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    3m50s (x21 over 9m35s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-686485 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-686485 logs -l app=hello-node-connect: exit status 1 (86.188488ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w8tc4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-686485 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-686485 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.255.228
IPs:                      10.110.255.228
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30416/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
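Note the empty Endpoints field: the only pod matching the selector app=hello-node-connect never became Ready, so NodePort 30416 has nothing to forward to. A quick cross-check (standard kubectl commands, not part of the recorded run):

	kubectl --context functional-686485 -n default get endpoints hello-node-connect
	kubectl --context functional-686485 -n default get pods -l app=hello-node-connect -o wide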
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-686485
helpers_test.go:243: (dbg) docker inspect functional-686485:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7",
	        "Created": "2025-09-29T11:28:49.947415423Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 312755,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T11:28:50.027029754Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/hosts",
	        "LogPath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7-json.log",
	        "Name": "/functional-686485",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-686485:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-686485",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7",
	                "LowerDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573-init/diff:/var/lib/docker/overlay2/83e06d49de89e61a1046432dce270924281d24e14aa4bd929fb6d16b3962f5cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-686485",
	                "Source": "/var/lib/docker/volumes/functional-686485/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-686485",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-686485",
	                "name.minikube.sigs.k8s.io": "functional-686485",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba7e2032b4c7c1852078cf95c118e09db7ba68bd9e71a188e5c4248100ffad60",
	            "SandboxKey": "/var/run/docker/netns/ba7e2032b4c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-686485": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:a8:81:2a:9e:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4625b3a7a589d85dc9c16fe0c52282f454d62ce28670a509256f159d69a12956",
	                    "EndpointID": "acc054f48a40506f78db053c88ca5833fc530f5b8f98803782cea0419d707da1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-686485",
	                        "94cef4d5f9be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-686485 -n functional-686485
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 logs -n 25: (1.723427439s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-686485 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ kubectl │ functional-686485 kubectl -- --context functional-686485 get pods                                                         │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ -p functional-686485 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:31 UTC │
	│ service │ invalid-svc -p functional-686485                                                                                          │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ config  │ functional-686485 config unset cpus                                                                                       │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ cp      │ functional-686485 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ config  │ functional-686485 config get cpus                                                                                         │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ config  │ functional-686485 config set cpus 2                                                                                       │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ config  │ functional-686485 config get cpus                                                                                         │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ config  │ functional-686485 config unset cpus                                                                                       │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ ssh     │ functional-686485 ssh -n functional-686485 sudo cat /home/docker/cp-test.txt                                              │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ config  │ functional-686485 config get cpus                                                                                         │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ ssh     │ functional-686485 ssh echo hello                                                                                          │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ cp      │ functional-686485 cp functional-686485:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd798651978/001/cp-test.txt │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ ssh     │ functional-686485 ssh cat /etc/hostname                                                                                   │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ ssh     │ functional-686485 ssh -n functional-686485 sudo cat /home/docker/cp-test.txt                                              │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ tunnel  │ functional-686485 tunnel --alsologtostderr                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ tunnel  │ functional-686485 tunnel --alsologtostderr                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ cp      │ functional-686485 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ ssh     │ functional-686485 ssh -n functional-686485 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ tunnel  │ functional-686485 tunnel --alsologtostderr                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ addons  │ functional-686485 addons list                                                                                             │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:35 UTC │ 29 Sep 25 11:35 UTC │
	│ addons  │ functional-686485 addons list -o json                                                                                     │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:35 UTC │ 29 Sep 25 11:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:30:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:30:40.225137  317523 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:30:40.225259  317523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:40.225263  317523 out.go:374] Setting ErrFile to fd 2...
	I0929 11:30:40.225267  317523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:40.226011  317523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 11:30:40.226454  317523 out.go:368] Setting JSON to false
	I0929 11:30:40.227359  317523 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4391,"bootTime":1759141049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0929 11:30:40.227418  317523 start.go:140] virtualization:  
	I0929 11:30:40.230954  317523 out.go:179] * [functional-686485] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 11:30:40.235100  317523 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:30:40.235273  317523 notify.go:220] Checking for updates...
	I0929 11:30:40.240931  317523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:30:40.243759  317523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:30:40.246733  317523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	I0929 11:30:40.249737  317523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 11:30:40.252577  317523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:30:40.255935  317523 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:30:40.256027  317523 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:30:40.277371  317523 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 11:30:40.277475  317523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:30:40.346062  317523 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-29 11:30:40.335707186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:30:40.346154  317523 docker.go:318] overlay module found
	I0929 11:30:40.349336  317523 out.go:179] * Using the docker driver based on existing profile
	I0929 11:30:40.352329  317523 start.go:304] selected driver: docker
	I0929 11:30:40.352339  317523 start.go:924] validating driver "docker" against &{Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:30:40.352465  317523 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:30:40.352582  317523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:30:40.409392  317523 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-29 11:30:40.399671028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:30:40.409784  317523 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:30:40.409808  317523 cni.go:84] Creating CNI manager for ""
	I0929 11:30:40.409860  317523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:30:40.409901  317523 start.go:348] cluster config:
	{Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:30:40.413125  317523 out.go:179] * Starting "functional-686485" primary control-plane node in "functional-686485" cluster
	I0929 11:30:40.415940  317523 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 11:30:40.418812  317523 out.go:179] * Pulling base image v0.0.48 ...
	I0929 11:30:40.421632  317523 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:30:40.421693  317523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 11:30:40.421694  317523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 11:30:40.421700  317523 cache.go:58] Caching tarball of preloaded images
	I0929 11:30:40.421784  317523 preload.go:172] Found /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0929 11:30:40.421792  317523 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:30:40.421940  317523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/config.json ...
	I0929 11:30:40.447466  317523 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 11:30:40.447479  317523 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 11:30:40.447503  317523 cache.go:232] Successfully downloaded all kic artifacts
	I0929 11:30:40.447534  317523 start.go:360] acquireMachinesLock for functional-686485: {Name:mk00044b677bdabb62e4bfe5467000365c4e2351 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:40.447613  317523 start.go:364] duration metric: took 59.015µs to acquireMachinesLock for "functional-686485"
	I0929 11:30:40.447637  317523 start.go:96] Skipping create...Using existing machine configuration
	I0929 11:30:40.447642  317523 fix.go:54] fixHost starting: 
	I0929 11:30:40.447940  317523 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
	I0929 11:30:40.465368  317523 fix.go:112] recreateIfNeeded on functional-686485: state=Running err=<nil>
	W0929 11:30:40.465386  317523 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 11:30:40.468562  317523 out.go:252] * Updating the running docker "functional-686485" container ...
	I0929 11:30:40.468608  317523 machine.go:93] provisionDockerMachine start ...
	I0929 11:30:40.468720  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:40.486217  317523 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:40.486580  317523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0929 11:30:40.486588  317523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 11:30:40.633320  317523 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-686485
	
	I0929 11:30:40.633334  317523 ubuntu.go:182] provisioning hostname "functional-686485"
	I0929 11:30:40.633392  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:40.651117  317523 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:40.651412  317523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0929 11:30:40.651421  317523 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-686485 && echo "functional-686485" | sudo tee /etc/hostname
	I0929 11:30:40.802992  317523 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-686485
	
	I0929 11:30:40.803071  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:40.821257  317523 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:40.821672  317523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0929 11:30:40.821688  317523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-686485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-686485/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-686485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:30:40.960349  317523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:30:40.960363  317523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21656-292570/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-292570/.minikube}
	I0929 11:30:40.960380  317523 ubuntu.go:190] setting up certificates
	I0929 11:30:40.960388  317523 provision.go:84] configureAuth start
	I0929 11:30:40.960454  317523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-686485
	I0929 11:30:40.978710  317523 provision.go:143] copyHostCerts
	I0929 11:30:40.978781  317523 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-292570/.minikube/cert.pem, removing ...
	I0929 11:30:40.978796  317523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-292570/.minikube/cert.pem
	I0929 11:30:40.978867  317523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/cert.pem (1123 bytes)
	I0929 11:30:40.978958  317523 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-292570/.minikube/key.pem, removing ...
	I0929 11:30:40.978962  317523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-292570/.minikube/key.pem
	I0929 11:30:40.978985  317523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/key.pem (1675 bytes)
	I0929 11:30:40.979033  317523 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-292570/.minikube/ca.pem, removing ...
	I0929 11:30:40.979036  317523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-292570/.minikube/ca.pem
	I0929 11:30:40.979058  317523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/ca.pem (1078 bytes)
	I0929 11:30:40.979100  317523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem org=jenkins.functional-686485 san=[127.0.0.1 192.168.49.2 functional-686485 localhost minikube]
	I0929 11:30:41.499490  317523 provision.go:177] copyRemoteCerts
	I0929 11:30:41.499542  317523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:30:41.499586  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:41.517869  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:30:41.617486  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 11:30:41.643048  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 11:30:41.670335  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 11:30:41.694948  317523 provision.go:87] duration metric: took 734.547593ms to configureAuth
	I0929 11:30:41.694965  317523 ubuntu.go:206] setting minikube options for container-runtime
	I0929 11:30:41.695171  317523 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:30:41.695273  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:41.712769  317523 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:41.713064  317523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0929 11:30:41.713076  317523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:30:47.130694  317523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 11:30:47.130709  317523 machine.go:96] duration metric: took 6.66209378s to provisionDockerMachine
	I0929 11:30:47.130719  317523 start.go:293] postStartSetup for "functional-686485" (driver="docker")
	I0929 11:30:47.130729  317523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:30:47.130793  317523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:30:47.130838  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:47.148247  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:30:47.245268  317523 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:30:47.248477  317523 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 11:30:47.248499  317523 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 11:30:47.248509  317523 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 11:30:47.248514  317523 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 11:30:47.248524  317523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-292570/.minikube/addons for local assets ...
	I0929 11:30:47.248579  317523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-292570/.minikube/files for local assets ...
	I0929 11:30:47.248658  317523 filesync.go:149] local asset: /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem -> 2944252.pem in /etc/ssl/certs
	I0929 11:30:47.248731  317523 filesync.go:149] local asset: /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/test/nested/copy/294425/hosts -> hosts in /etc/test/nested/copy/294425
	I0929 11:30:47.248774  317523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/294425
	I0929 11:30:47.257279  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem --> /etc/ssl/certs/2944252.pem (1708 bytes)
	I0929 11:30:47.281536  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/test/nested/copy/294425/hosts --> /etc/test/nested/copy/294425/hosts (40 bytes)
	I0929 11:30:47.306790  317523 start.go:296] duration metric: took 176.05755ms for postStartSetup
	I0929 11:30:47.306861  317523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:30:47.306915  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:47.323659  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:30:47.417666  317523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 11:30:47.422378  317523 fix.go:56] duration metric: took 6.974728863s for fixHost
	I0929 11:30:47.422392  317523 start.go:83] releasing machines lock for "functional-686485", held for 6.974772446s
	I0929 11:30:47.422460  317523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-686485
	I0929 11:30:47.439352  317523 ssh_runner.go:195] Run: cat /version.json
	I0929 11:30:47.439394  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:47.439421  317523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:30:47.439469  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:47.457211  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:30:47.470599  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:30:47.679334  317523 ssh_runner.go:195] Run: systemctl --version
	I0929 11:30:47.683451  317523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:30:47.827973  317523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 11:30:47.832022  317523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:30:47.840480  317523 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 11:30:47.840547  317523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:30:47.849661  317523 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 11:30:47.849674  317523 start.go:495] detecting cgroup driver to use...
	I0929 11:30:47.849704  317523 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 11:30:47.849746  317523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:30:47.861913  317523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:47.872955  317523 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:30:47.873021  317523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:30:47.886336  317523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:30:47.898931  317523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:30:48.041362  317523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:30:48.173932  317523 docker.go:234] disabling docker service ...
	I0929 11:30:48.174008  317523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:30:48.186556  317523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:30:48.198594  317523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:30:48.333993  317523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:30:48.475214  317523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:30:48.486626  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:48.503145  317523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 11:30:48.503219  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.513058  317523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:30:48.513112  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.523292  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.533251  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.544311  317523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:30:48.553764  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.563419  317523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.572936  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.582493  317523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:30:48.590693  317523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:30:48.598935  317523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:48.720450  317523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 11:30:49.435056  317523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:30:49.435114  317523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:30:49.438861  317523 start.go:563] Will wait 60s for crictl version
	I0929 11:30:49.438909  317523 ssh_runner.go:195] Run: which crictl
	I0929 11:30:49.442209  317523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:30:49.482217  317523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 11:30:49.482303  317523 ssh_runner.go:195] Run: crio --version
	I0929 11:30:49.523608  317523 ssh_runner.go:195] Run: crio --version
	I0929 11:30:49.565710  317523 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 11:30:49.568646  317523 cli_runner.go:164] Run: docker network inspect functional-686485 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 11:30:49.584565  317523 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 11:30:49.591482  317523 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0929 11:30:49.594642  317523 kubeadm.go:875] updating cluster {Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:30:49.594750  317523 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:30:49.594841  317523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:30:49.640122  317523 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:30:49.640135  317523 crio.go:433] Images already preloaded, skipping extraction
	I0929 11:30:49.640186  317523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:30:49.678186  317523 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:30:49.678198  317523 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:30:49.678209  317523 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0929 11:30:49.678302  317523 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-686485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:30:49.678391  317523 ssh_runner.go:195] Run: crio config
	I0929 11:30:49.729255  317523 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
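The line above records minikube replacing the default apiserver admission-plugin list with the user-supplied extra-config value (apiserver.enable-admission-plugins=NamespaceAutoProvision), which then appears in the generated kubeadm config further down. As a rough, hypothetical illustration only (the helper name and data layout are not minikube's actual internals), a "component.key=value" override could be folded into per-component extra args like this:

// Illustrative sketch: merge a user-supplied "component.key=value" override
// into per-component extra args before rendering a kubeadm config.
package main

import (
	"fmt"
	"strings"
)

func applyOverride(extraArgs map[string]map[string]string, raw string) error {
	// raw has the form "component.key=value", e.g.
	// "apiserver.enable-admission-plugins=NamespaceAutoProvision".
	head, value, ok := strings.Cut(raw, "=")
	if !ok {
		return fmt.Errorf("expected component.key=value, got %q", raw)
	}
	component, key, ok := strings.Cut(head, ".")
	if !ok {
		return fmt.Errorf("expected component.key on the left-hand side, got %q", head)
	}
	if extraArgs[component] == nil {
		extraArgs[component] = map[string]string{}
	}
	extraArgs[component][key] = value // the user value replaces the default
	return nil
}

func main() {
	args := map[string]map[string]string{
		"apiserver": {"enable-admission-plugins": "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"},
	}
	_ = applyOverride(args, "apiserver.enable-admission-plugins=NamespaceAutoProvision")
	fmt.Println(args["apiserver"]["enable-admission-plugins"]) // NamespaceAutoProvision
}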
	I0929 11:30:49.729275  317523 cni.go:84] Creating CNI manager for ""
	I0929 11:30:49.729285  317523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:30:49.729292  317523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:30:49.729313  317523 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-686485 NodeName:functional-686485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:30:49.729431  317523 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-686485"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:30:49.729494  317523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:30:49.738456  317523 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:30:49.738513  317523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:30:49.746935  317523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0929 11:30:49.764652  317523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:30:49.782005  317523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0929 11:30:49.799649  317523 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 11:30:49.803394  317523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:49.935303  317523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:30:49.948101  317523 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485 for IP: 192.168.49.2
	I0929 11:30:49.948112  317523 certs.go:194] generating shared ca certs ...
	I0929 11:30:49.948127  317523 certs.go:226] acquiring lock for ca certs: {Name:mkd338253a13587776ce07e6238e0355c4b0e958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:49.948255  317523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-292570/.minikube/ca.key
	I0929 11:30:49.948412  317523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.key
	I0929 11:30:49.948419  317523 certs.go:256] generating profile certs ...
	I0929 11:30:49.948527  317523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.key
	I0929 11:30:49.948576  317523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/apiserver.key.67211a0c
	I0929 11:30:49.948615  317523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/proxy-client.key
	I0929 11:30:49.948719  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/294425.pem (1338 bytes)
	W0929 11:30:49.948750  317523 certs.go:480] ignoring /home/jenkins/minikube-integration/21656-292570/.minikube/certs/294425_empty.pem, impossibly tiny 0 bytes
	I0929 11:30:49.948757  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 11:30:49.948780  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem (1078 bytes)
	I0929 11:30:49.948806  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:30:49.948830  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem (1675 bytes)
	I0929 11:30:49.948873  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem (1708 bytes)
	I0929 11:30:49.949451  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:30:49.974174  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 11:30:49.998686  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:30:50.023114  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:30:50.053164  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 11:30:50.079969  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:30:50.112173  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:30:50.139566  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 11:30:50.164934  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/certs/294425.pem --> /usr/share/ca-certificates/294425.pem (1338 bytes)
	I0929 11:30:50.190235  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem --> /usr/share/ca-certificates/2944252.pem (1708 bytes)
	I0929 11:30:50.215351  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:30:50.239913  317523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:30:50.258103  317523 ssh_runner.go:195] Run: openssl version
	I0929 11:30:50.263399  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294425.pem && ln -fs /usr/share/ca-certificates/294425.pem /etc/ssl/certs/294425.pem"
	I0929 11:30:50.272610  317523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294425.pem
	I0929 11:30:50.277627  317523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:28 /usr/share/ca-certificates/294425.pem
	I0929 11:30:50.277684  317523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294425.pem
	I0929 11:30:50.288999  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294425.pem /etc/ssl/certs/51391683.0"
	I0929 11:30:50.320721  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2944252.pem && ln -fs /usr/share/ca-certificates/2944252.pem /etc/ssl/certs/2944252.pem"
	I0929 11:30:50.329849  317523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2944252.pem
	I0929 11:30:50.333808  317523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:28 /usr/share/ca-certificates/2944252.pem
	I0929 11:30:50.333863  317523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2944252.pem
	I0929 11:30:50.351023  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2944252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 11:30:50.363544  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:30:50.375034  317523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:30:50.379195  317523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:30:50.379253  317523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:30:50.388657  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
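The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: the hash printed by "openssl x509 -hash -noout" becomes the link name under /etc/ssl/certs so TLS clients that scan that directory can locate the CA. A minimal sketch of that step, assuming the same openssl CLI and the certificate paths shown in the log (not minikube's actual implementation):

// Illustrative: compute the OpenSSL subject hash of a CA certificate and
// link it as /etc/ssl/certs/<hash>.0, mirroring the "ln -fs" seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // recreate idempotently, like "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}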
	I0929 11:30:50.402681  317523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:30:50.412651  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 11:30:50.422828  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 11:30:50.432308  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 11:30:50.440143  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 11:30:50.449547  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 11:30:50.461902  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
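The "-checkend 86400" invocations above verify that each control-plane certificate remains valid for at least the next 24 hours; a non-zero exit would trigger regeneration. An equivalent check written directly against crypto/x509, shown here only as an illustration (the path is one of those from the log, the helper name is hypothetical):

// Illustrative equivalent of "openssl x509 -noout -in <cert> -checkend 86400":
// report whether a PEM certificate expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}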
	I0929 11:30:50.469290  317523 kubeadm.go:392] StartCluster: {Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:30:50.469383  317523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 11:30:50.469448  317523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:30:50.508922  317523 cri.go:89] found id: "60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41"
	I0929 11:30:50.508934  317523 cri.go:89] found id: "a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405"
	I0929 11:30:50.508937  317523 cri.go:89] found id: "ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8"
	I0929 11:30:50.508940  317523 cri.go:89] found id: "e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745"
	I0929 11:30:50.508942  317523 cri.go:89] found id: "67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc"
	I0929 11:30:50.508945  317523 cri.go:89] found id: "ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45"
	I0929 11:30:50.508948  317523 cri.go:89] found id: "84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939"
	I0929 11:30:50.508950  317523 cri.go:89] found id: "0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d"
	I0929 11:30:50.508953  317523 cri.go:89] found id: ""
	I0929 11:30:50.509001  317523 ssh_runner.go:195] Run: sudo runc list -f json
	I0929 11:30:50.531747  317523 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d/userdata","rootfs":"/var/lib/containers/storage/overlay/8a2d5074c9c82677731008fdb72e08c1a028aca1396c6940014b095b899552f9/merged","created":"2025-09-29T11:30:18.559089546Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termi
nationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.200195131Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1c8159cf-d9b8-4964-81a9-3a541a78ede1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1c8159cf-d9b8-4964-81a9-3a541a78ede1/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisio
ner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8a2d5074c9c82677731008fdb72e08c1a028aca1396c6940014b095b899552f9/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_1c8159cf-d9b8-4964-81a9-3a541a78ede1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_1c8159cf-d9b8-4964-81a9-3a541a78ede1_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib
/kubelet/pods/1c8159cf-d9b8-4964-81a9-3a541a78ede1/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1c8159cf-d9b8-4964-81a9-3a541a78ede1/containers/storage-provisioner/2f72bbd8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1c8159cf-d9b8-4964-81a9-3a541a78ede1/volumes/kubernetes.io~projected/kube-api-access-w6jl5\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1c8159cf-d9b8-4964-81a9-3a541a78ede1","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-te
st\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2025-09-29T11:30:05.366114476Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41/userdata","rootfs":"/var/lib/containers/storage/overlay/be5375df1300e05d61645921900821993787e9fee53b63b0bd1010b2cb45ae54/merged","created":"2025-09-29T11:3
0:18.520786659Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41","io.kubern
etes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.350788084Z","io.kubernetes.cri-o.Image":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-686485\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"bda7994784646446b924dd8e7bf7821a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-686485_bda7994784646446b924dd8e7bf7821a/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/be5375df1300e05d61645921900821993787e9fee53b63b0bd1010b2cb45ae54/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-686485_kube-system_bda7994784646446b924
dd8e7bf7821a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-686485_kube-system_bda7994784646446b924dd8e7bf7821a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/bda7994784646446b924dd8e7bf7821a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/bda7994784646446b924dd8e7bf7821a/containers/etcd/ae993c10\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\"
:\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-686485","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"bda7994784646446b924dd8e7bf7821a","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"bda7994784646446b924dd8e7bf7821a","kubernetes.io/config.seen":"2025-09-29T11:29:09.893762990Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc/userdata","rootfs":"/var/lib/containers/storage/overlay/053129d03
a764b917487f095b2ad152ce427b72e2dba748f41cb6591c4e94245/merged","created":"2025-09-29T11:30:18.497632792Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}
","io.kubernetes.cri-o.ContainerID":"67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.264114424Z","io.kubernetes.cri-o.Image":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-686485\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2f45a864dc9b5e76810cfe1e08ccba6d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-686485_2f45a864dc9b5e76810cfe1e08ccba6d/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/053129d03a764b917487f095b2ad152ce427b72e2dba748f41cb6591c4e94245/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-686485_kube-system_2f45a864dc9b5e76810cfe1e08ccba6d_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-686485_kube-system_2f45a864dc9b5e76810cfe1e08ccba6d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\
":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2f45a864dc9b5e76810cfe1e08ccba6d/containers/kube-controller-manager/d414e2de\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2f45a864dc9b5e76810cfe1e08ccba6d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\
"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-686485","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2f45a864dc9b5e76810cfe1e08ccba6d","kubernetes.io/config.hash":"2f45a864dc9b5e76810cfe1e08ccba6d","kubernetes.io/config.seen":"2025-09-29T11:29:09.893768791Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939/us
erdata","rootfs":"/var/lib/containers/storage/overlay/704e75912e674338dad40353adc855f787c4eaea09e8fe3036bed7bfe563f7e7/merged","created":"2025-09-29T11:30:18.396730965Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\
",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.210610796Z","io.kubernetes.cri-o.Image":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-686485\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cafc9679f97aaf5bc65c227ee7fb3ea4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-686485_cafc9679f97aaf5bc65c227ee7fb3ea4/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/704e75912e674338dad40353adc855f787c4eaea09e8fe3036bed7bfe563f7e7/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-686485_kube-system_cafc9679f97aaf5bc65c227ee7fb3ea4_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-686485_kube-system_cafc9679f97aaf5bc65c227ee7fb3ea4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cafc9679f97aaf5bc65c227ee7fb3ea4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"contain
er_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cafc9679f97aaf5bc65c227ee7fb3ea4/containers/kube-scheduler/a62d9265\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-686485","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cafc9679f97aaf5bc65c227ee7fb3ea4","kubernetes.io/config.hash":"cafc9679f97aaf5bc65c227ee7fb3ea4","kubernetes.io/config.seen":"2025-09-29T11:29:09.893770136Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405/userdata","rootf
s":"/var/lib/containers/storage/overlay/ec874bcedf3d08b621825c9551a9b11f3e9eec1424bf1e1b6953b629686c9c4e/merged","created":"2025-09-29T11:30:18.643114721Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e2e56a4","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e2e56a4\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.32623131Z","io.kubernetes.cri-o.Image":"6fc3
2d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.34.0","io.kubernetes.cri-o.ImageRef":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-xs8dc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ec9ec386-a485-42d0-950e-56883d7a9f26\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-xs8dc_ec9ec386-a485-42d0-950e-56883d7a9f26/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ec874bcedf3d08b621825c9551a9b11f3e9eec1424bf1e1b6953b629686c9c4e/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-xs8dc_kube-system_ec9ec386-a485-42d0-950e-56883d7a9f26_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/85a762676d3f540a8
676b9caee1d08735895ed4d0abb409744afef1c63724770/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-xs8dc_kube-system_ec9ec386-a485-42d0-950e-56883d7a9f26_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ec9ec386-a485-42d0-950e-56883d7a9f26/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ec9ec386-a485-42d0-
950e-56883d7a9f26/containers/kube-proxy/ad7989fc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/ec9ec386-a485-42d0-950e-56883d7a9f26/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/ec9ec386-a485-42d0-950e-56883d7a9f26/volumes/kubernetes.io~projected/kube-api-access-tw2r7\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-xs8dc","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ec9ec386-a485-42d0-950e-56883d7a9f26","kubernetes.io/config.seen":"2025-09-29T11:29:23.902068916Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8","pid"
:0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8/userdata","rootfs":"/var/lib/containers/storage/overlay/720bee73aa666a1e41ba2c569c69ab631969eaa87d4f67584d5032b74416b632/merged","created":"2025-09-29T11:30:18.4994781Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"127fdb84","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"127fdb84\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148
f297185273a8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.317522528Z","io.kubernetes.cri-o.Image":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri-o.ImageRef":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-btlb5\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"45ae98fe-ca6f-4349-82af-33448daa0ce5\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-btlb5_45ae98fe-ca6f-4349-82af-33448daa0ce5/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/720bee73aa666a1e41ba2c569c69ab631969eaa87d4f67584d5032b74416b632/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-
cni_kindnet-btlb5_kube-system_45ae98fe-ca6f-4349-82af-33448daa0ce5_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-btlb5_kube-system_45ae98fe-ca6f-4349-82af-33448daa0ce5_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/45ae98fe-ca6f-4349-82af-33448daa0ce5/etc-hosts\",\"rea
donly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/45ae98fe-ca6f-4349-82af-33448daa0ce5/containers/kindnet-cni/3d7e20b1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/45ae98fe-ca6f-4349-82af-33448daa0ce5/volumes/kubernetes.io~projected/kube-api-access-hmn24\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-btlb5","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"45ae98fe-ca6f-4349-82af-33448daa0ce5","kubernetes.io/config.seen":"2025-09-29T11:29:23.853287861Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e3f
7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745/userdata","rootfs":"/var/lib/containers/storage/overlay/745035ccbe63ced4b436c83165325f8df1f59901b82da7b738f5890cb2d09e8b/merged","created":"2025-09-29T11:30:18.510283773Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8441,\\\"containerPort\\\":8441,\\\"protoc
ol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.298619036Z","io.kubernetes.cri-o.Image":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-686485\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8a00df89b1e084f8ad2ff0b1b29ca855\"}","io.kubernetes.cri-o.Log
Path":"/var/log/pods/kube-system_kube-apiserver-functional-686485_8a00df89b1e084f8ad2ff0b1b29ca855/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/745035ccbe63ced4b436c83165325f8df1f59901b82da7b738f5890cb2d09e8b/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-686485_kube-system_8a00df89b1e084f8ad2ff0b1b29ca855_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9af55b288172885384bc186e9a1e96d3a810c5bb9b058d23400ba9c96162ea85/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9af55b288172885384bc186e9a1e96d3a810c5bb9b058d23400ba9c96162ea85","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-686485_kube-system_8a00df89b1e084f8ad2ff0b1b29ca855_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri
-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8a00df89b1e084f8ad2ff0b1b29ca855/containers/kube-apiserver/bdc7c875\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8a00df89b1e084f8ad2ff0b1b29ca855/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path
\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-686485","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8a00df89b1e084f8ad2ff0b1b29ca855","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"8a00df89b1e084f8ad2ff0b1b29ca855","kubernetes.io/config.seen":"2025-09-29T11:29:09.893767043Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45/userdata","rootfs":"/var/lib/containers/storage/overlay/74a388207b65601ff06a6dbce7a1121de4a8d23b1d89aca83c186c241c989bd7/merged","created":"2025-09-29
T11:30:18.515037221Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9bf792","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9bf792\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"conta
inerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"liveness-probe\\\",\\\"containerPort\\\":8080,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"readiness-probe\\\",\\\"containerPort\\\":8181,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.24718286Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri-o.ImageRef":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","io.kubernetes.cri-o.Labels
":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bc5c9577-fcmb4\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bc5c9577-fcmb4_a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/74a388207b65601ff06a6dbce7a1121de4a8d23b1d89aca83c186c241c989bd7/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bc5c9577-fcmb4_kube-system_a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bc5c9577-fcmb4_kube-system
_a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0/containers/coredns/952368c7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0/volumes/kubernetes.io~project
ed/kube-api-access-g8mr4\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bc5c9577-fcmb4","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0","kubernetes.io/config.seen":"2025-09-29T11:30:05.373946526Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0929 11:30:50.532424  317523 cri.go:126] list returned 8 containers
	I0929 11:30:50.532433  317523 cri.go:129] container: {ID:0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d Status:stopped}
	I0929 11:30:50.532445  317523 cri.go:135] skipping {0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532452  317523 cri.go:129] container: {ID:60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41 Status:stopped}
	I0929 11:30:50.532458  317523 cri.go:135] skipping {60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532464  317523 cri.go:129] container: {ID:67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc Status:stopped}
	I0929 11:30:50.532468  317523 cri.go:135] skipping {67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532473  317523 cri.go:129] container: {ID:84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939 Status:stopped}
	I0929 11:30:50.532477  317523 cri.go:135] skipping {84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532482  317523 cri.go:129] container: {ID:a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405 Status:stopped}
	I0929 11:30:50.532486  317523 cri.go:135] skipping {a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532491  317523 cri.go:129] container: {ID:ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8 Status:stopped}
	I0929 11:30:50.532496  317523 cri.go:135] skipping {ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532501  317523 cri.go:129] container: {ID:e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745 Status:stopped}
	I0929 11:30:50.532509  317523 cri.go:135] skipping {e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532513  317523 cri.go:129] container: {ID:ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45 Status:stopped}
	I0929 11:30:50.532517  317523 cri.go:135] skipping {ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45 stopped}: state = "stopped", want "paused"
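Every container returned by "runc list" is in state "stopped", so the pause-oriented listing above skips all eight of them. A minimal sketch of that filter, with hypothetical type and function names standing in for minikube's internals:

// Illustrative: keep only containers whose state matches the requested one
// (here "paused"), skipping the rest, as in the cri.go lines above.
package main

import "fmt"

type container struct {
	ID     string
	Status string
}

func filterByState(all []container, want string) []container {
	var kept []container
	for _, c := range all {
		if c.Status != want {
			continue // mirrors the 'skipping {... stopped}: state = "stopped", want "paused"' lines
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	all := []container{{ID: "0468e3a7", Status: "stopped"}, {ID: "60e87867", Status: "stopped"}}
	fmt.Println(len(filterByState(all, "paused"))) // 0: everything was stopped, nothing paused
}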
	I0929 11:30:50.532571  317523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:30:50.541227  317523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 11:30:50.541246  317523 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 11:30:50.541303  317523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 11:30:50.549684  317523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:30:50.550208  317523 kubeconfig.go:125] found "functional-686485" server: "https://192.168.49.2:8441"
	I0929 11:30:50.551699  317523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 11:30:50.560657  317523 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-09-29 11:29:00.664822374 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-09-29 11:30:49.795791580 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
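The drift check above is simply a unified diff of the freshly rendered kubeadm.yaml against the copy already on the node: exit status 0 means no change, exit status 1 means drift and triggers the reconfiguration path that follows. A minimal sketch of that idea (helper name and structure are illustrative, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hasKubeadmDrift mirrors the check in the log above: run `diff -u` and treat
// exit status 1 (files differ) as drift; exit status 0 means the configs match.
func hasKubeadmDrift(currentPath, renderedPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", currentPath, renderedPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files are identical
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // any other failure (missing file, diff not installed, ...)
}

func main() {
	drift, patch, err := hasKubeadmDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drift {
		fmt.Print("kubeadm config drift detected:\n" + patch)
	}
}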
	I0929 11:30:50.560678  317523 kubeadm.go:1152] stopping kube-system containers ...
	I0929 11:30:50.560690  317523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0929 11:30:50.560741  317523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:30:50.601767  317523 cri.go:89] found id: "60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41"
	I0929 11:30:50.601779  317523 cri.go:89] found id: "a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405"
	I0929 11:30:50.601791  317523 cri.go:89] found id: "ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8"
	I0929 11:30:50.601794  317523 cri.go:89] found id: "e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745"
	I0929 11:30:50.601798  317523 cri.go:89] found id: "67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc"
	I0929 11:30:50.601800  317523 cri.go:89] found id: "ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45"
	I0929 11:30:50.601803  317523 cri.go:89] found id: "84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939"
	I0929 11:30:50.601805  317523 cri.go:89] found id: "0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d"
	I0929 11:30:50.601807  317523 cri.go:89] found id: ""
	I0929 11:30:50.601812  317523 cri.go:252] Stopping containers: [60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41 a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405 ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8 e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745 67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45 84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939 0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d]
	I0929 11:30:50.601872  317523 ssh_runner.go:195] Run: which crictl
	I0929 11:30:50.605637  317523 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41 a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405 ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8 e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745 67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45 84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939 0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d
	I0929 11:30:50.679596  317523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0929 11:30:50.795127  317523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:30:50.803494  317523 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Sep 29 11:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Sep 29 11:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Sep 29 11:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Sep 29 11:29 /etc/kubernetes/scheduler.conf
	
	I0929 11:30:50.803554  317523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0929 11:30:50.812162  317523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0929 11:30:50.820593  317523 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:30:50.820649  317523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:30:50.829113  317523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0929 11:30:50.837696  317523 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:30:50.837754  317523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:30:50.846272  317523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0929 11:30:50.855030  317523 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:30:50.855086  317523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
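Each grep above checks whether an existing kubeconfig still references the expected control-plane endpoint (https://control-plane.minikube.internal:8441); when the grep exits non-zero, the stale file is removed so the kubeadm kubeconfig phase below can regenerate it. A rough, hypothetical sketch of that check:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureKubeconfigEndpoint keeps confPath only if it references the expected
// API-server endpoint; otherwise it removes the file so kubeadm can regenerate
// it. This mirrors the grep/rm sequence in the log; the helper name is made up.
func ensureKubeconfigEndpoint(confPath, endpoint string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // endpoint still referenced, keep the file
	}
	fmt.Printf("%s does not reference %s, removing\n", confPath, endpoint)
	return os.Remove(confPath)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureKubeconfigEndpoint(conf, endpoint); err != nil {
			fmt.Println("skipping:", err)
		}
	}
}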
	I0929 11:30:50.863890  317523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:30:50.872613  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:30:50.924801  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:30:54.786934  317523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.862106588s)
	I0929 11:30:54.786955  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:30:54.983284  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:30:55.059966  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:30:55.166220  317523 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:30:55.166302  317523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:30:55.666423  317523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:30:56.166829  317523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:30:56.193412  317523 api_server.go:72] duration metric: took 1.027207446s to wait for apiserver process to appear ...
	I0929 11:30:56.193426  317523 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:30:56.193445  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:30:59.312484  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 11:30:59.312503  317523 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 11:30:59.312515  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:30:59.415351  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 11:30:59.415369  317523 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 11:30:59.693702  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:30:59.704600  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:30:59.704617  317523 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:31:00.197678  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:31:00.303451  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:31:00.303474  317523 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:31:00.694213  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:31:00.734452  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:31:00.734480  317523 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:31:01.193704  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:31:01.201847  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0929 11:31:01.215530  317523 api_server.go:141] control plane version: v1.34.0
	I0929 11:31:01.215547  317523 api_server.go:131] duration metric: took 5.022116368s to wait for apiserver health ...
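The sequence above is the usual pattern while the apiserver comes back up: the first anonymous probes are rejected with 403 before the RBAC bootstrap roles exist, /healthz then answers 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still pending, and finally 200 "ok". A minimal, self-contained sketch of polling /healthz the same way (TLS verification skipped because the probe is anonymous; names are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok" or the timeout expires; 403 and 500 are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", time.Minute); err != nil {
		panic(err)
	}
}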
	I0929 11:31:01.215556  317523 cni.go:84] Creating CNI manager for ""
	I0929 11:31:01.215562  317523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:31:01.219382  317523 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 11:31:01.222671  317523 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 11:31:01.227111  317523 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 11:31:01.227124  317523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 11:31:01.251706  317523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 11:31:01.860828  317523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:31:01.864488  317523 system_pods.go:59] 8 kube-system pods found
	I0929 11:31:01.864516  317523 system_pods.go:61] "coredns-66bc5c9577-fcmb4" [a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:31:01.864523  317523 system_pods.go:61] "etcd-functional-686485" [19552840-e7cd-4a09-9260-e1abcd85c27b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:31:01.864530  317523 system_pods.go:61] "kindnet-btlb5" [45ae98fe-ca6f-4349-82af-33448daa0ce5] Running
	I0929 11:31:01.864537  317523 system_pods.go:61] "kube-apiserver-functional-686485" [cd3b4e28-0996-4298-8418-929c9cd9ed55] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:31:01.864544  317523 system_pods.go:61] "kube-controller-manager-functional-686485" [3fcbb27c-3670-400d-a621-597389444163] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:31:01.864549  317523 system_pods.go:61] "kube-proxy-xs8dc" [ec9ec386-a485-42d0-950e-56883d7a9f26] Running
	I0929 11:31:01.864555  317523 system_pods.go:61] "kube-scheduler-functional-686485" [433196c2-0db6-4147-88ec-8056af3d2962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 11:31:01.864559  317523 system_pods.go:61] "storage-provisioner" [1c8159cf-d9b8-4964-81a9-3a541a78ede1] Running
	I0929 11:31:01.864565  317523 system_pods.go:74] duration metric: took 3.725714ms to wait for pod list to return data ...
	I0929 11:31:01.864571  317523 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:31:01.867276  317523 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 11:31:01.867296  317523 node_conditions.go:123] node cpu capacity is 2
	I0929 11:31:01.867309  317523 node_conditions.go:105] duration metric: took 2.731246ms to run NodePressure ...
	I0929 11:31:01.867327  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:31:02.119636  317523 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0929 11:31:02.123297  317523 kubeadm.go:735] kubelet initialised
	I0929 11:31:02.123309  317523 kubeadm.go:736] duration metric: took 3.657888ms waiting for restarted kubelet to initialise ...
	I0929 11:31:02.123328  317523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:31:02.132141  317523 ops.go:34] apiserver oom_adj: -16
	I0929 11:31:02.132152  317523 kubeadm.go:593] duration metric: took 11.590901006s to restartPrimaryControlPlane
	I0929 11:31:02.132160  317523 kubeadm.go:394] duration metric: took 11.662881317s to StartCluster
	I0929 11:31:02.132174  317523 settings.go:142] acquiring lock: {Name:mk8da0e06d1edc552f3cec9ed26678491ca734d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:02.132236  317523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:31:02.132893  317523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/kubeconfig: {Name:mk84aa46812be3352ca2874bd06be6025c5058bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:02.133102  317523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:31:02.133364  317523 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:31:02.133425  317523 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 11:31:02.133480  317523 addons.go:69] Setting storage-provisioner=true in profile "functional-686485"
	I0929 11:31:02.133496  317523 addons.go:238] Setting addon storage-provisioner=true in "functional-686485"
	W0929 11:31:02.133502  317523 addons.go:247] addon storage-provisioner should already be in state true
	I0929 11:31:02.133521  317523 host.go:66] Checking if "functional-686485" exists ...
	I0929 11:31:02.133568  317523 addons.go:69] Setting default-storageclass=true in profile "functional-686485"
	I0929 11:31:02.133576  317523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-686485"
	I0929 11:31:02.133839  317523 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
	I0929 11:31:02.134370  317523 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
	I0929 11:31:02.138530  317523 out.go:179] * Verifying Kubernetes components...
	I0929 11:31:02.142236  317523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:31:02.173913  317523 addons.go:238] Setting addon default-storageclass=true in "functional-686485"
	W0929 11:31:02.173925  317523 addons.go:247] addon default-storageclass should already be in state true
	I0929 11:31:02.173949  317523 host.go:66] Checking if "functional-686485" exists ...
	I0929 11:31:02.174385  317523 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
	I0929 11:31:02.177640  317523 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:31:02.180601  317523 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:02.180614  317523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:31:02.180705  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:31:02.215760  317523 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:02.215773  317523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:31:02.215839  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:31:02.217945  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:31:02.242376  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:31:02.369806  317523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:31:02.384778  317523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:02.390179  317523 node_ready.go:35] waiting up to 6m0s for node "functional-686485" to be "Ready" ...
	I0929 11:31:02.393699  317523 node_ready.go:49] node "functional-686485" is "Ready"
	I0929 11:31:02.393715  317523 node_ready.go:38] duration metric: took 3.517099ms for node "functional-686485" to be "Ready" ...
	I0929 11:31:02.393726  317523 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:31:02.393789  317523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:31:02.399304  317523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:02.573911  317523 api_server.go:72] duration metric: took 440.78555ms to wait for apiserver process to appear ...
	I0929 11:31:02.573923  317523 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:31:02.573939  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:31:02.588884  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0929 11:31:02.594488  317523 api_server.go:141] control plane version: v1.34.0
	I0929 11:31:02.594515  317523 api_server.go:131] duration metric: took 20.585874ms to wait for apiserver health ...
	I0929 11:31:02.594522  317523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:31:02.599763  317523 system_pods.go:59] 8 kube-system pods found
	I0929 11:31:02.599781  317523 system_pods.go:61] "coredns-66bc5c9577-fcmb4" [a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:31:02.599788  317523 system_pods.go:61] "etcd-functional-686485" [19552840-e7cd-4a09-9260-e1abcd85c27b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:31:02.599794  317523 system_pods.go:61] "kindnet-btlb5" [45ae98fe-ca6f-4349-82af-33448daa0ce5] Running
	I0929 11:31:02.599800  317523 system_pods.go:61] "kube-apiserver-functional-686485" [cd3b4e28-0996-4298-8418-929c9cd9ed55] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:31:02.599828  317523 system_pods.go:61] "kube-controller-manager-functional-686485" [3fcbb27c-3670-400d-a621-597389444163] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:31:02.599832  317523 system_pods.go:61] "kube-proxy-xs8dc" [ec9ec386-a485-42d0-950e-56883d7a9f26] Running
	I0929 11:31:02.599838  317523 system_pods.go:61] "kube-scheduler-functional-686485" [433196c2-0db6-4147-88ec-8056af3d2962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 11:31:02.599842  317523 system_pods.go:61] "storage-provisioner" [1c8159cf-d9b8-4964-81a9-3a541a78ede1] Running
	I0929 11:31:02.599846  317523 system_pods.go:74] duration metric: took 5.319727ms to wait for pod list to return data ...
	I0929 11:31:02.599852  317523 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:31:02.605743  317523 default_sa.go:45] found service account: "default"
	I0929 11:31:02.605758  317523 default_sa.go:55] duration metric: took 5.900761ms for default service account to be created ...
	I0929 11:31:02.605766  317523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:31:02.609753  317523 system_pods.go:86] 8 kube-system pods found
	I0929 11:31:02.609772  317523 system_pods.go:89] "coredns-66bc5c9577-fcmb4" [a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:31:02.609782  317523 system_pods.go:89] "etcd-functional-686485" [19552840-e7cd-4a09-9260-e1abcd85c27b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:31:02.609786  317523 system_pods.go:89] "kindnet-btlb5" [45ae98fe-ca6f-4349-82af-33448daa0ce5] Running
	I0929 11:31:02.609792  317523 system_pods.go:89] "kube-apiserver-functional-686485" [cd3b4e28-0996-4298-8418-929c9cd9ed55] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:31:02.609797  317523 system_pods.go:89] "kube-controller-manager-functional-686485" [3fcbb27c-3670-400d-a621-597389444163] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:31:02.609800  317523 system_pods.go:89] "kube-proxy-xs8dc" [ec9ec386-a485-42d0-950e-56883d7a9f26] Running
	I0929 11:31:02.609805  317523 system_pods.go:89] "kube-scheduler-functional-686485" [433196c2-0db6-4147-88ec-8056af3d2962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 11:31:02.609809  317523 system_pods.go:89] "storage-provisioner" [1c8159cf-d9b8-4964-81a9-3a541a78ede1] Running
	I0929 11:31:02.609815  317523 system_pods.go:126] duration metric: took 4.043779ms to wait for k8s-apps to be running ...
	I0929 11:31:02.609821  317523 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:31:02.609878  317523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:31:03.287576  317523 system_svc.go:56] duration metric: took 677.746921ms WaitForService to wait for kubelet
	I0929 11:31:03.287590  317523 kubeadm.go:578] duration metric: took 1.154468727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:31:03.287604  317523 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:31:03.290620  317523 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0929 11:31:03.290950  317523 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 11:31:03.290963  317523 node_conditions.go:123] node cpu capacity is 2
	I0929 11:31:03.290973  317523 node_conditions.go:105] duration metric: took 3.364913ms to run NodePressure ...
	I0929 11:31:03.290984  317523 start.go:241] waiting for startup goroutines ...
	I0929 11:31:03.293538  317523 addons.go:514] duration metric: took 1.160125897s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0929 11:31:03.293562  317523 start.go:246] waiting for cluster config update ...
	I0929 11:31:03.293572  317523 start.go:255] writing updated cluster config ...
	I0929 11:31:03.293859  317523 ssh_runner.go:195] Run: rm -f paused
	I0929 11:31:03.297484  317523 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:31:03.301294  317523 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fcmb4" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 11:31:05.307765  317523 pod_ready.go:104] pod "coredns-66bc5c9577-fcmb4" is not "Ready", error: <nil>
	I0929 11:31:06.306148  317523 pod_ready.go:94] pod "coredns-66bc5c9577-fcmb4" is "Ready"
	I0929 11:31:06.306162  317523 pod_ready.go:86] duration metric: took 3.004856058s for pod "coredns-66bc5c9577-fcmb4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:06.308566  317523 pod_ready.go:83] waiting for pod "etcd-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:06.814314  317523 pod_ready.go:94] pod "etcd-functional-686485" is "Ready"
	I0929 11:31:06.814327  317523 pod_ready.go:86] duration metric: took 505.749759ms for pod "etcd-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:06.816604  317523 pod_ready.go:83] waiting for pod "kube-apiserver-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 11:31:08.822508  317523 pod_ready.go:104] pod "kube-apiserver-functional-686485" is not "Ready", error: <nil>
	W0929 11:31:11.321360  317523 pod_ready.go:104] pod "kube-apiserver-functional-686485" is not "Ready", error: <nil>
	W0929 11:31:13.322623  317523 pod_ready.go:104] pod "kube-apiserver-functional-686485" is not "Ready", error: <nil>
	I0929 11:31:13.822109  317523 pod_ready.go:94] pod "kube-apiserver-functional-686485" is "Ready"
	I0929 11:31:13.822122  317523 pod_ready.go:86] duration metric: took 7.00550579s for pod "kube-apiserver-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:13.824337  317523 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.330269  317523 pod_ready.go:94] pod "kube-controller-manager-functional-686485" is "Ready"
	I0929 11:31:14.330285  317523 pod_ready.go:86] duration metric: took 505.935533ms for pod "kube-controller-manager-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.332558  317523 pod_ready.go:83] waiting for pod "kube-proxy-xs8dc" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.337087  317523 pod_ready.go:94] pod "kube-proxy-xs8dc" is "Ready"
	I0929 11:31:14.337101  317523 pod_ready.go:86] duration metric: took 4.530992ms for pod "kube-proxy-xs8dc" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.339291  317523 pod_ready.go:83] waiting for pod "kube-scheduler-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.420586  317523 pod_ready.go:94] pod "kube-scheduler-functional-686485" is "Ready"
	I0929 11:31:14.420601  317523 pod_ready.go:86] duration metric: took 81.297139ms for pod "kube-scheduler-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.420610  317523 pod_ready.go:40] duration metric: took 11.123107056s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:31:14.480677  317523 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 11:31:14.483743  317523 out.go:179] * Done! kubectl is now configured to use "functional-686485" cluster and "default" namespace by default
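The pod_ready waits above poll each control-plane pod until its Ready condition turns True (or the pod is gone). Roughly the same check expressed with client-go, offered as a hedged illustration rather than minikube's own helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod exists and has condition Ready=True.
func isPodReady(ctx context.Context, client kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := isPodReady(ctx, client, "kube-system", "coredns-66bc5c9577-fcmb4")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}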
	
	
	==> CRI-O <==
	Sep 29 11:43:13 functional-686485 crio[4144]: time="2025-09-29 11:43:13.204953629Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d30d34d7-22cd-412c-a850-b67c63704db3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:43:13 functional-686485 crio[4144]: time="2025-09-29 11:43:13.205170753Z" level=info msg="Image docker.io/nginx:alpine not found" id=d30d34d7-22cd-412c-a850-b67c63704db3 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:43:24 functional-686485 crio[4144]: time="2025-09-29 11:43:24.205122197Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=63c90a4f-19fe-426d-b9f8-a79b2139ed22 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:43:24 functional-686485 crio[4144]: time="2025-09-29 11:43:24.205344424Z" level=info msg="Image docker.io/nginx:alpine not found" id=63c90a4f-19fe-426d-b9f8-a79b2139ed22 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:43:38 functional-686485 crio[4144]: time="2025-09-29 11:43:38.204678378Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7171cfa0-74d8-4ba1-a8ab-5c9a16faa0fa name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:43:38 functional-686485 crio[4144]: time="2025-09-29 11:43:38.204908933Z" level=info msg="Image docker.io/nginx:alpine not found" id=7171cfa0-74d8-4ba1-a8ab-5c9a16faa0fa name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:43:50 functional-686485 crio[4144]: time="2025-09-29 11:43:50.205046531Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1f2a1409-c610-4880-b388-a19273a28044 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:43:50 functional-686485 crio[4144]: time="2025-09-29 11:43:50.205271531Z" level=info msg="Image docker.io/nginx:alpine not found" id=1f2a1409-c610-4880-b388-a19273a28044 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:44:01 functional-686485 crio[4144]: time="2025-09-29 11:44:01.205130919Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3356c438-4fd4-45b1-95bd-6613339ca158 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:44:01 functional-686485 crio[4144]: time="2025-09-29 11:44:01.205290733Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fdf13b00-5a0d-44a7-80e5-0d50875cea82 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:44:01 functional-686485 crio[4144]: time="2025-09-29 11:44:01.205513140Z" level=info msg="Image docker.io/nginx:alpine not found" id=fdf13b00-5a0d-44a7-80e5-0d50875cea82 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:44:13 functional-686485 crio[4144]: time="2025-09-29 11:44:13.204590271Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=19b0e554-c7d5-47db-9f4e-89188dd11d93 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:44:13 functional-686485 crio[4144]: time="2025-09-29 11:44:13.204808560Z" level=info msg="Image docker.io/nginx:alpine not found" id=19b0e554-c7d5-47db-9f4e-89188dd11d93 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:44:28 functional-686485 crio[4144]: time="2025-09-29 11:44:28.204735682Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b00b9567-53f3-4ec8-b0ef-553d3bb38487 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:44:28 functional-686485 crio[4144]: time="2025-09-29 11:44:28.204960116Z" level=info msg="Image docker.io/nginx:alpine not found" id=b00b9567-53f3-4ec8-b0ef-553d3bb38487 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:44:43 functional-686485 crio[4144]: time="2025-09-29 11:44:43.204348048Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=a68da226-27f9-4153-bebe-698ef907ca9f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:44:43 functional-686485 crio[4144]: time="2025-09-29 11:44:43.204618479Z" level=info msg="Image docker.io/nginx:alpine not found" id=a68da226-27f9-4153-bebe-698ef907ca9f name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:44:55 functional-686485 crio[4144]: time="2025-09-29 11:44:55.206534672Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=66f08b93-19d4-4df4-a46d-36d6db946143 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:44:55 functional-686485 crio[4144]: time="2025-09-29 11:44:55.206757384Z" level=info msg="Image docker.io/nginx:alpine not found" id=66f08b93-19d4-4df4-a46d-36d6db946143 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:45:06 functional-686485 crio[4144]: time="2025-09-29 11:45:06.204256932Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=cf778636-af22-424f-a659-a3bdc7412bd0 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:45:06 functional-686485 crio[4144]: time="2025-09-29 11:45:06.204504102Z" level=info msg="Image docker.io/nginx:alpine not found" id=cf778636-af22-424f-a659-a3bdc7412bd0 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:45:19 functional-686485 crio[4144]: time="2025-09-29 11:45:19.206794040Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=efb9826c-1b7f-4d3f-91d6-75254bb2d27c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:45:19 functional-686485 crio[4144]: time="2025-09-29 11:45:19.207016563Z" level=info msg="Image docker.io/nginx:alpine not found" id=efb9826c-1b7f-4d3f-91d6-75254bb2d27c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:45:32 functional-686485 crio[4144]: time="2025-09-29 11:45:32.204822160Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=cd4153f4-70f8-4250-beb3-a4807795ce87 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:45:32 functional-686485 crio[4144]: time="2025-09-29 11:45:32.205050656Z" level=info msg="Image docker.io/nginx:alpine not found" id=cd4153f4-70f8-4250-beb3-a4807795ce87 name=/runtime.v1.ImageService/ImageStatus
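The repeated "Image docker.io/nginx:alpine not found" entries show the kubelet asking CRI-O about an image that was never successfully pulled, which keeps the dependent pod from becoming ready. One way to confirm the pull by hand from inside the node is to drive crictl (the same CLI used elsewhere in this log); a small, hypothetical wrapper:

package main

import (
	"fmt"
	"os/exec"
)

// pullImage forces a pull of the missing image via crictl and then lists
// what CRI-O currently has stored, so the failure can be reproduced manually.
func pullImage(image string) error {
	if out, err := exec.Command("sudo", "crictl", "pull", image).CombinedOutput(); err != nil {
		return fmt.Errorf("crictl pull failed: %v\n%s", err, out)
	}
	out, err := exec.Command("sudo", "crictl", "images").CombinedOutput()
	if err != nil {
		return err
	}
	fmt.Println(string(out))
	return nil
}

func main() {
	if err := pullImage("docker.io/nginx:alpine"); err != nil {
		panic(err)
	}
}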
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0acab5d004ce0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   14 minutes ago      Running             coredns                   2                   c3fb257895e24       coredns-66bc5c9577-fcmb4
	206bb1f896aa2       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf   14 minutes ago      Running             kube-proxy                2                   85a762676d3f5       kube-proxy-xs8dc
	5832399e5aad8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   14 minutes ago      Running             kindnet-cni               2                   b5e389b82b519       kindnet-btlb5
	e8c3e33f13e06       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   14 minutes ago      Running             storage-provisioner       2                   2d780174ae1a9       storage-provisioner
	6b45c8de9c4ce       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570   14 minutes ago      Running             kube-controller-manager   2                   bae0a8024391e       kube-controller-manager-functional-686485
	7e02997c4a169       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be   14 minutes ago      Running             kube-apiserver            0                   3924fd9382104       kube-apiserver-functional-686485
	7965a1dedcfc2       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee   14 minutes ago      Running             kube-scheduler            2                   1b1e5f3189429       kube-scheduler-functional-686485
	6de9008a9f773       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 minutes ago      Running             etcd                      2                   48d4a4927f8d9       etcd-functional-686485
	60e8786739776       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 minutes ago      Exited              etcd                      1                   48d4a4927f8d9       etcd-functional-686485
	a43e6af95e6d5       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf   15 minutes ago      Exited              kube-proxy                1                   85a762676d3f5       kube-proxy-xs8dc
	ad4e2c1e43fa8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   15 minutes ago      Exited              kindnet-cni               1                   b5e389b82b519       kindnet-btlb5
	67e39ed141cbe       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570   15 minutes ago      Exited              kube-controller-manager   1                   bae0a8024391e       kube-controller-manager-functional-686485
	ff16b9dcb68ef       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   15 minutes ago      Exited              coredns                   1                   c3fb257895e24       coredns-66bc5c9577-fcmb4
	84cae6a2613ef       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee   15 minutes ago      Exited              kube-scheduler            1                   1b1e5f3189429       kube-scheduler-functional-686485
	0468e3a72325e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   15 minutes ago      Exited              storage-provisioner       1                   2d780174ae1a9       storage-provisioner
	
	
	==> coredns [0acab5d004ce042926912ef6b546568fdb5f73d8e9af6c1bb44c31d95c375308] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34138 - 924 "HINFO IN 3820287925151504037.9173467966324991986. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026454485s
	
	
	==> coredns [ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44113 - 5213 "HINFO IN 332428050901775543.5573564185816815509. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.031089405s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-686485
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-686485
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=functional-686485
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_29_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:29:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-686485
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:45:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:45:28 +0000   Mon, 29 Sep 2025 11:29:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:45:28 +0000   Mon, 29 Sep 2025 11:29:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:45:28 +0000   Mon, 29 Sep 2025 11:29:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:45:28 +0000   Mon, 29 Sep 2025 11:30:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-686485
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8f830ab79054939be3711c0d700e16a
	  System UUID:                bca24e78-d01e-4d6f-bf99-8242d437899c
	  Boot ID:                    3ea59072-b9ed-4996-bd90-d451fda04a88
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-jc96t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  default                     hello-node-connect-7d85dfc575-w8tc4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-fcmb4                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-functional-686485                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-btlb5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-functional-686485             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-functional-686485    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-xs8dc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-functional-686485             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x9 over 16m)  kubelet          Node functional-686485 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-686485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-686485 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     16m                kubelet          Node functional-686485 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                kubelet          Node functional-686485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m                kubelet          Node functional-686485 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           16m                node-controller  Node functional-686485 event: Registered Node functional-686485 in Controller
	  Normal   NodeReady                15m                kubelet          Node functional-686485 status is now: NodeReady
	  Normal   RegisteredNode           15m                node-controller  Node functional-686485 event: Registered Node functional-686485 in Controller
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node functional-686485 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node functional-686485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)  kubelet          Node functional-686485 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                node-controller  Node functional-686485 event: Registered Node functional-686485 in Controller
	
	
	==> dmesg <==
	[Sep29 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015067] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.515134] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034601] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.790647] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.751861] kauditd_printk_skb: 36 callbacks suppressed
	[Sep29 10:36] hrtimer: interrupt took 21542036 ns
	[Sep29 11:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41] <==
	{"level":"warn","ts":"2025-09-29T11:30:21.021527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.037463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.061168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.084023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.111050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.134277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.296547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51802","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:30:41.875243Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:30:41.875311Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-686485","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T11:30:41.875410Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:30:42.014972Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:30:42.016499Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016561Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016615Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:30:42.016625Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:30:42.016589Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T11:30:42.016663Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:30:42.016688Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016768Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016813Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:30:42.016851Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:30:42.020537Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T11:30:42.020617Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:30:42.020650Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T11:30:42.020658Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-686485","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [6de9008a9f773874f4e141a751358ca0571fe1f41170f35bc9f8f40c67ba6e9b] <==
	{"level":"warn","ts":"2025-09-29T11:30:57.971780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:57.985521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.005551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.022642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.034708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.053829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.076395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.101099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.111791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.125275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.145141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.166473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.177241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.199870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.213274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.235431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.313116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.317723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.355398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.392479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.443466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.543845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32852","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:40:57.034340Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1014}
	{"level":"info","ts":"2025-09-29T11:40:57.058031Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1014,"took":"23.086445ms","hash":2302584538,"current-db-size-bytes":3276800,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1433600,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-29T11:40:57.058079Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2302584538,"revision":1014,"compact-revision":-1}
	
	
	==> kernel <==
	 11:45:37 up  1:28,  0 users,  load average: 0.18, 0.25, 1.09
	Linux functional-686485 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [5832399e5aad8fff21002574d904a6ff3227773daa9dd3dd491dbc401fa6c427] <==
	I0929 11:43:31.032489       1 main.go:301] handling current node
	I0929 11:43:41.029436       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:43:41.029478       1 main.go:301] handling current node
	I0929 11:43:51.035999       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:43:51.036042       1 main.go:301] handling current node
	I0929 11:44:01.029638       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:44:01.029748       1 main.go:301] handling current node
	I0929 11:44:11.029471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:44:11.029508       1 main.go:301] handling current node
	I0929 11:44:21.034533       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:44:21.034569       1 main.go:301] handling current node
	I0929 11:44:31.032388       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:44:31.032427       1 main.go:301] handling current node
	I0929 11:44:41.030223       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:44:41.030276       1 main.go:301] handling current node
	I0929 11:44:51.029477       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:44:51.029512       1 main.go:301] handling current node
	I0929 11:45:01.029575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:45:01.029717       1 main.go:301] handling current node
	I0929 11:45:11.030148       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:45:11.030188       1 main.go:301] handling current node
	I0929 11:45:21.036095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:45:21.036140       1 main.go:301] handling current node
	I0929 11:45:31.030147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:45:31.030253       1 main.go:301] handling current node
	
	
	==> kindnet [ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8] <==
	I0929 11:30:18.823322       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 11:30:18.828165       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 11:30:18.829360       1 main.go:148] setting mtu 1500 for CNI 
	I0929 11:30:18.830669       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 11:30:18.830833       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T11:30:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 11:30:19.107517       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 11:30:19.107544       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 11:30:19.107553       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 11:30:19.107857       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 11:30:23.208854       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 11:30:23.208959       1 metrics.go:72] Registering metrics
	I0929 11:30:23.209067       1 controller.go:711] "Syncing nftables rules"
	I0929 11:30:29.106961       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:30:29.107024       1 main.go:301] handling current node
	I0929 11:30:39.107229       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:30:39.107260       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7e02997c4a169c16aa505782e2302d588dd8856611e6e53513deba2f5708373a] <==
	I0929 11:32:23.679655       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:33:05.149033       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:33:28.646220       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:34:12.334621       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:34:48.731415       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:35:23.156411       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:35:34.720037       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.255.228"}
	I0929 11:35:52.785441       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:36:52.281760       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:37:16.897488       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.63.234"}
	I0929 11:37:22.121391       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:38:09.352696       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:38:46.834594       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:39:30.637689       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:39:47.814594       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:40:31.030400       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:40:59.436223       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 11:41:04.706564       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:35.474970       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:42:29.110774       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:42:55.607607       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:43:41.520357       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:44:02.464065       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:45:06.687045       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:45:07.365017       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc] <==
	I0929 11:30:25.542283       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 11:30:25.542423       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:30:25.542497       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:30:25.542603       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-686485"
	I0929 11:30:25.542713       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:30:25.542747       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:30:25.542775       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 11:30:25.543066       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:30:25.548027       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 11:30:25.553859       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 11:30:25.557143       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:30:25.560371       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 11:30:25.562633       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:30:25.564920       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 11:30:25.567147       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 11:30:25.569765       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:30:25.573397       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:30:25.577340       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 11:30:25.583473       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:30:25.583515       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:30:25.584345       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:30:25.584416       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 11:30:25.593361       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:30:25.600470       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 11:30:25.602738       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-controller-manager [6b45c8de9c4ceaccdd57cc9b24372eb3e9939690f47613e29be4b40cd51089ef] <==
	I0929 11:31:02.858832       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:31:02.858844       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 11:31:02.866795       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 11:31:02.858745       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 11:31:02.886970       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:31:02.887132       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-686485"
	I0929 11:31:02.887246       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:31:02.866846       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 11:31:02.866874       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:31:02.866887       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 11:31:02.870049       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 11:31:02.885200       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:31:02.892593       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:31:02.889045       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 11:31:02.885222       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 11:31:02.889059       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 11:31:02.889070       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 11:31:02.889086       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 11:31:02.889097       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 11:31:02.893705       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:31:02.900998       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:31:02.967539       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:31:03.027692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:31:03.027846       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:31:03.027900       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [206bb1f896aa2f49fa4a9317779250e75cb2dcb01e984f74509e7b0c53120a9f] <==
	I0929 11:31:00.851542       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:31:00.945683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:31:01.047150       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:31:01.047194       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:31:01.047260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:31:01.155283       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:31:01.155348       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:31:01.159905       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:31:01.160233       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:31:01.160258       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:31:01.161823       1 config.go:200] "Starting service config controller"
	I0929 11:31:01.161842       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:31:01.161860       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:31:01.161865       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:31:01.161876       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:31:01.161881       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:31:01.162622       1 config.go:309] "Starting node config controller"
	I0929 11:31:01.162640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:31:01.162648       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:31:01.262759       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:31:01.262805       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:31:01.262852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405] <==
	I0929 11:30:23.555511       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:30:23.722149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:30:23.823478       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:30:23.836615       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:30:23.836768       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:30:23.890249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:30:23.890411       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:30:23.895038       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:30:23.895383       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:30:23.895532       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:30:23.896771       1 config.go:200] "Starting service config controller"
	I0929 11:30:23.896829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:30:23.896869       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:30:23.896900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:30:23.896940       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:30:23.896967       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:30:23.897640       1 config.go:309] "Starting node config controller"
	I0929 11:30:23.897704       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:30:23.897749       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:30:23.997604       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:30:23.997652       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:30:23.997666       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7965a1dedcfc222b178cf7fa524081fbc78a93dd33338864f2d39c59fa5a3fe3] <==
	I0929 11:30:58.505418       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:30:59.404852       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:30:59.404958       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:30:59.404994       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:30:59.405035       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:30:59.453738       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:30:59.453770       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:30:59.455978       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:59.456059       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:59.456368       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:30:59.456639       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:30:59.560470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939] <==
	I0929 11:30:21.463798       1 serving.go:386] Generated self-signed cert in-memory
	I0929 11:30:24.008390       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:30:24.008422       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:30:24.013863       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:30:24.013954       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 11:30:24.013977       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 11:30:24.014007       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:30:24.016191       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:24.016218       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:24.016235       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:24.016240       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:24.114075       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 11:30:24.116806       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:24.116889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:41.877802       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 11:30:41.877832       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 11:30:41.877893       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 11:30:41.877933       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:41.877964       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:41.879564       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0929 11:30:41.880048       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 11:30:41.880155       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 11:44:55 functional-686485 kubelet[4481]: E0929 11:44:55.299516    4481 manager.go:1116] Failed to create existing container: /crio-c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4: Error finding container c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4: Status 404 returned error can't find the container with id c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4
	Sep 29 11:44:55 functional-686485 kubelet[4481]: E0929 11:44:55.299690    4481 manager.go:1116] Failed to create existing container: /crio-49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a: Error finding container 49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a: Status 404 returned error can't find the container with id 49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a
	Sep 29 11:44:55 functional-686485 kubelet[4481]: E0929 11:44:55.299866    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7: Error finding container 1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7: Status 404 returned error can't find the container with id 1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7
	Sep 29 11:44:55 functional-686485 kubelet[4481]: E0929 11:44:55.454667    4481 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146295454416203 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:44:55 functional-686485 kubelet[4481]: E0929 11:44:55.454704    4481 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146295454416203 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:45:01 functional-686485 kubelet[4481]: E0929 11:45:01.204802    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-jc96t" podUID="dfb2492f-5d90-4b4b-a067-e71679a1b43c"
	Sep 29 11:45:05 functional-686485 kubelet[4481]: E0929 11:45:05.204909    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-w8tc4" podUID="fcebf2d8-e667-466a-9b18-4dfaedada25e"
	Sep 29 11:45:05 functional-686485 kubelet[4481]: E0929 11:45:05.456737    4481 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146305456468689 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:45:05 functional-686485 kubelet[4481]: E0929 11:45:05.456774    4481 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146305456468689 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:45:06 functional-686485 kubelet[4481]: E0929 11:45:06.204805    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="0e76e3fd-a5fb-4ed9-9d4c-081d08f4e974"
	Sep 29 11:45:08 functional-686485 kubelet[4481]: E0929 11:45:08.204716    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3f43496a-bd99-48c7-a4c6-2775c0a9ffc0"
	Sep 29 11:45:15 functional-686485 kubelet[4481]: E0929 11:45:15.458994    4481 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146315458753009 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:45:15 functional-686485 kubelet[4481]: E0929 11:45:15.459033    4481 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146315458753009 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:45:16 functional-686485 kubelet[4481]: E0929 11:45:16.204323    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-jc96t" podUID="dfb2492f-5d90-4b4b-a067-e71679a1b43c"
	Sep 29 11:45:19 functional-686485 kubelet[4481]: E0929 11:45:19.206104    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-w8tc4" podUID="fcebf2d8-e667-466a-9b18-4dfaedada25e"
	Sep 29 11:45:19 functional-686485 kubelet[4481]: E0929 11:45:19.207301    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="0e76e3fd-a5fb-4ed9-9d4c-081d08f4e974"
	Sep 29 11:45:20 functional-686485 kubelet[4481]: E0929 11:45:20.204397    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3f43496a-bd99-48c7-a4c6-2775c0a9ffc0"
	Sep 29 11:45:25 functional-686485 kubelet[4481]: E0929 11:45:25.461261    4481 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146325461015320 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:45:25 functional-686485 kubelet[4481]: E0929 11:45:25.461303    4481 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146325461015320 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:45:30 functional-686485 kubelet[4481]: E0929 11:45:30.204263    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-w8tc4" podUID="fcebf2d8-e667-466a-9b18-4dfaedada25e"
	Sep 29 11:45:31 functional-686485 kubelet[4481]: E0929 11:45:31.203897    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-jc96t" podUID="dfb2492f-5d90-4b4b-a067-e71679a1b43c"
	Sep 29 11:45:32 functional-686485 kubelet[4481]: E0929 11:45:32.205314    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="0e76e3fd-a5fb-4ed9-9d4c-081d08f4e974"
	Sep 29 11:45:33 functional-686485 kubelet[4481]: E0929 11:45:33.204217    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3f43496a-bd99-48c7-a4c6-2775c0a9ffc0"
	Sep 29 11:45:35 functional-686485 kubelet[4481]: E0929 11:45:35.463467    4481 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146335463204267 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:45:35 functional-686485 kubelet[4481]: E0929 11:45:35.463502    4481 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146335463204267 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	
	
	==> storage-provisioner [0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d] <==
	I0929 11:30:19.697307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 11:30:23.184671       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 11:30:23.184721       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 11:30:23.202638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:26.676372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:30.936415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:34.535393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:37.589413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:40.611236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:40.618353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:30:40.618613       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 11:30:40.618782       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"33ea9649-429c-4699-8969-249f1f9741d0", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-686485_b4c49bc4-1e05-4be1-82e5-7d48a5f3f843 became leader
	I0929 11:30:40.618838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-686485_b4c49bc4-1e05-4be1-82e5-7d48a5f3f843!
	W0929 11:30:40.620931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:40.626871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:30:40.721671       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-686485_b4c49bc4-1e05-4be1-82e5-7d48a5f3f843!
	
	
	==> storage-provisioner [e8c3e33f13e06057074796f0984e1abe6593c9a7d5cf652efcac19bcbcd63795] <==
	W0929 11:45:12.054458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:14.057102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:14.061790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:16.065344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:16.073221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:18.077541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:18.082783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:20.091515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:20.096614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:22.099489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:22.103962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:24.106960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:24.111397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:26.114535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:26.121680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:28.125114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:28.129270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:30.132897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:30.137626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:32.140911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:32.147049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:34.150629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:34.154986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:36.164605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:45:36.213793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-686485 -n functional-686485
helpers_test.go:269: (dbg) Run:  kubectl --context functional-686485 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-jc96t hello-node-connect-7d85dfc575-w8tc4 nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-686485 describe pod hello-node-75c85bcc94-jc96t hello-node-connect-7d85dfc575-w8tc4 nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-686485 describe pod hello-node-75c85bcc94-jc96t hello-node-connect-7d85dfc575-w8tc4 nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-jc96t
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:37:16 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lcj6k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lcj6k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  8m21s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-jc96t to functional-686485
	  Normal   Pulling    4m57s (x5 over 8m21s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     4m28s (x5 over 8m21s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     4m28s (x5 over 8m21s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m10s (x16 over 8m20s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m1s (x21 over 8m20s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-w8tc4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:35:34 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qd2f2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qd2f2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w8tc4 to functional-686485
	  Normal   Pulling    6m20s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m20s (x5 over 9m38s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m20s (x5 over 9m38s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m33s (x18 over 9m38s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m53s (x21 over 9m38s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:31:24 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vslmk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vslmk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/nginx-svc to functional-686485
	  Warning  Failed     13m                   kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     9m38s (x2 over 12m)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m13s (x5 over 14m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7m43s (x5 over 13m)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m43s (x2 over 10m)   kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4m (x25 over 13m)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m20s (x28 over 13m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:31:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qtcb2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-qtcb2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/sp-pod to functional-686485
	  Normal   Pulling    7m39s (x5 over 14m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m9s (x5 over 12m)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m9s (x5 over 12m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    3m37s (x24 over 12m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m57s (x27 over 12m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.72s)
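Both hello-node pods above fail before the service connectivity check can run: their manifests reference the image by the short name "kicbase/echo-server", and under CRI-O the pull is rejected because /etc/containers/registries.conf on the node defines no unqualified-search registries. Two ways this kind of failure is usually avoided (a hedged sketch, not what the suite actually does; deployment and container names are taken from the describe output above):

    # reference the image fully qualified so no short-name resolution is needed
    kubectl --context functional-686485 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest

    # or, inside the node (minikube -p functional-686485 ssh), allow short names to search docker.io,
    # assuming the key is not already set elsewhere in registries.conf
    echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
    sudo systemctl restart crio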

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (249.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [1c8159cf-d9b8-4964-81a9-3a541a78ede1] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003584432s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-686485 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-686485 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-686485 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-686485 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3f43496a-bd99-48c7-a4c6-2775c0a9ffc0] Pending
helpers_test.go:352: "sp-pod" [3f43496a-bd99-48c7-a4c6-2775c0a9ffc0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0929 11:33:25.872806  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:33:53.583317  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-686485 -n functional-686485
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-29 11:35:31.467817675 +0000 UTC m=+916.265590466
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-686485 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-686485 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-686485/192.168.49.2
Start Time:       Mon, 29 Sep 2025 11:31:31 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qtcb2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-qtcb2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/sp-pod to functional-686485
  Normal   Pulling    85s (x3 over 4m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     20s (x3 over 2m52s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     20s (x3 over 2m52s)  kubelet            Error: ErrImagePull
  Normal   BackOff    5s (x3 over 2m51s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     5s (x3 over 2m51s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-686485 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-686485 logs sp-pod -n default: exit status 1 (103.556445ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-686485 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
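The sp-pod failure is not a storage-provisioner problem: the pod is scheduled and its PVC-backed volume (claim "myclaim") is mounted, but the container never starts because unauthenticated pulls of docker.io/nginx are throttled (toomanyrequests). Typical mitigations for this kind of run (an illustrative sketch; the credentials below are placeholders):

    # pre-load the image from the host so the kubelet never has to pull it
    minikube -p functional-686485 image load docker.io/nginx

    # or authenticate pulls via a docker-registry secret attached to the default service account
    kubectl --context functional-686485 create secret docker-registry dockerhub-creds \
      --docker-username=<user> --docker-password=<token>
    kubectl --context functional-686485 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'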
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-686485
helpers_test.go:243: (dbg) docker inspect functional-686485:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7",
	        "Created": "2025-09-29T11:28:49.947415423Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 312755,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T11:28:50.027029754Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/hosts",
	        "LogPath": "/var/lib/docker/containers/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7-json.log",
	        "Name": "/functional-686485",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-686485:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-686485",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7",
	                "LowerDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573-init/diff:/var/lib/docker/overlay2/83e06d49de89e61a1046432dce270924281d24e14aa4bd929fb6d16b3962f5cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5e419d7c9fbd4352c87019c8d2517a08e6c12ece36b320195b6ede3d1482573/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-686485",
	                "Source": "/var/lib/docker/volumes/functional-686485/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-686485",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-686485",
	                "name.minikube.sigs.k8s.io": "functional-686485",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba7e2032b4c7c1852078cf95c118e09db7ba68bd9e71a188e5c4248100ffad60",
	            "SandboxKey": "/var/run/docker/netns/ba7e2032b4c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-686485": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:a8:81:2a:9e:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4625b3a7a589d85dc9c16fe0c52282f454d62ce28670a509256f159d69a12956",
	                    "EndpointID": "acc054f48a40506f78db053c88ca5833fc530f5b8f98803782cea0419d707da1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-686485",
	                        "94cef4d5f9be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
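The NetworkSettings.Ports block in the inspect output above is what the tooling relies on to reach the node: container port 22 is published on 127.0.0.1:33148, which is the address the SSH provisioning steps in the "Last Start" log below connect to. The same value can be read back directly with the Go template the log itself uses (illustrative invocation):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-686485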
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-686485 -n functional-686485
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 logs -n 25
I0929 11:35:33.462486  294425 retry.go:31] will retry after 6.924188474s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 logs -n 25: (1.744643739s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-686485 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ cache   │ functional-686485 cache reload                                                                                            │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ ssh     │ functional-686485 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ kubectl │ functional-686485 kubectl -- --context functional-686485 get pods                                                         │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ -p functional-686485 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:31 UTC │
	│ service │ invalid-svc -p functional-686485                                                                                          │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ config  │ functional-686485 config unset cpus                                                                                       │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ cp      │ functional-686485 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ config  │ functional-686485 config get cpus                                                                                         │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ config  │ functional-686485 config set cpus 2                                                                                       │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ config  │ functional-686485 config get cpus                                                                                         │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ config  │ functional-686485 config unset cpus                                                                                       │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ ssh     │ functional-686485 ssh -n functional-686485 sudo cat /home/docker/cp-test.txt                                              │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ config  │ functional-686485 config get cpus                                                                                         │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ ssh     │ functional-686485 ssh echo hello                                                                                          │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ cp      │ functional-686485 cp functional-686485:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd798651978/001/cp-test.txt │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ ssh     │ functional-686485 ssh cat /etc/hostname                                                                                   │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ ssh     │ functional-686485 ssh -n functional-686485 sudo cat /home/docker/cp-test.txt                                              │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ tunnel  │ functional-686485 tunnel --alsologtostderr                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ tunnel  │ functional-686485 tunnel --alsologtostderr                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	│ cp      │ functional-686485 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ ssh     │ functional-686485 ssh -n functional-686485 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │ 29 Sep 25 11:31 UTC │
	│ tunnel  │ functional-686485 tunnel --alsologtostderr                                                                                │ functional-686485 │ jenkins │ v1.37.0 │ 29 Sep 25 11:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:30:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:30:40.225137  317523 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:30:40.225259  317523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:40.225263  317523 out.go:374] Setting ErrFile to fd 2...
	I0929 11:30:40.225267  317523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:30:40.226011  317523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 11:30:40.226454  317523 out.go:368] Setting JSON to false
	I0929 11:30:40.227359  317523 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4391,"bootTime":1759141049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0929 11:30:40.227418  317523 start.go:140] virtualization:  
	I0929 11:30:40.230954  317523 out.go:179] * [functional-686485] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 11:30:40.235100  317523 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:30:40.235273  317523 notify.go:220] Checking for updates...
	I0929 11:30:40.240931  317523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:30:40.243759  317523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:30:40.246733  317523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	I0929 11:30:40.249737  317523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 11:30:40.252577  317523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:30:40.255935  317523 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:30:40.256027  317523 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:30:40.277371  317523 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 11:30:40.277475  317523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:30:40.346062  317523 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-29 11:30:40.335707186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:30:40.346154  317523 docker.go:318] overlay module found
	I0929 11:30:40.349336  317523 out.go:179] * Using the docker driver based on existing profile
	I0929 11:30:40.352329  317523 start.go:304] selected driver: docker
	I0929 11:30:40.352339  317523 start.go:924] validating driver "docker" against &{Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:30:40.352465  317523 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:30:40.352582  317523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:30:40.409392  317523 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-29 11:30:40.399671028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:30:40.409784  317523 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:30:40.409808  317523 cni.go:84] Creating CNI manager for ""
	I0929 11:30:40.409860  317523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:30:40.409901  317523 start.go:348] cluster config:
	{Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:30:40.413125  317523 out.go:179] * Starting "functional-686485" primary control-plane node in "functional-686485" cluster
	I0929 11:30:40.415940  317523 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 11:30:40.418812  317523 out.go:179] * Pulling base image v0.0.48 ...
	I0929 11:30:40.421632  317523 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:30:40.421693  317523 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 11:30:40.421694  317523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 11:30:40.421700  317523 cache.go:58] Caching tarball of preloaded images
	I0929 11:30:40.421784  317523 preload.go:172] Found /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0929 11:30:40.421792  317523 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:30:40.421940  317523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/config.json ...
	I0929 11:30:40.447466  317523 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 11:30:40.447479  317523 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 11:30:40.447503  317523 cache.go:232] Successfully downloaded all kic artifacts
	I0929 11:30:40.447534  317523 start.go:360] acquireMachinesLock for functional-686485: {Name:mk00044b677bdabb62e4bfe5467000365c4e2351 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:30:40.447613  317523 start.go:364] duration metric: took 59.015µs to acquireMachinesLock for "functional-686485"
	I0929 11:30:40.447637  317523 start.go:96] Skipping create...Using existing machine configuration
	I0929 11:30:40.447642  317523 fix.go:54] fixHost starting: 
	I0929 11:30:40.447940  317523 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
	I0929 11:30:40.465368  317523 fix.go:112] recreateIfNeeded on functional-686485: state=Running err=<nil>
	W0929 11:30:40.465386  317523 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 11:30:40.468562  317523 out.go:252] * Updating the running docker "functional-686485" container ...
	I0929 11:30:40.468608  317523 machine.go:93] provisionDockerMachine start ...
	I0929 11:30:40.468720  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:40.486217  317523 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:40.486580  317523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0929 11:30:40.486588  317523 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 11:30:40.633320  317523 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-686485
	
	I0929 11:30:40.633334  317523 ubuntu.go:182] provisioning hostname "functional-686485"
	I0929 11:30:40.633392  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:40.651117  317523 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:40.651412  317523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0929 11:30:40.651421  317523 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-686485 && echo "functional-686485" | sudo tee /etc/hostname
	I0929 11:30:40.802992  317523 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-686485
	
	I0929 11:30:40.803071  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:40.821257  317523 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:40.821672  317523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0929 11:30:40.821688  317523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-686485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-686485/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-686485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:30:40.960349  317523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:30:40.960363  317523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21656-292570/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-292570/.minikube}
	I0929 11:30:40.960380  317523 ubuntu.go:190] setting up certificates
	I0929 11:30:40.960388  317523 provision.go:84] configureAuth start
	I0929 11:30:40.960454  317523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-686485
	I0929 11:30:40.978710  317523 provision.go:143] copyHostCerts
	I0929 11:30:40.978781  317523 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-292570/.minikube/cert.pem, removing ...
	I0929 11:30:40.978796  317523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-292570/.minikube/cert.pem
	I0929 11:30:40.978867  317523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/cert.pem (1123 bytes)
	I0929 11:30:40.978958  317523 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-292570/.minikube/key.pem, removing ...
	I0929 11:30:40.978962  317523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-292570/.minikube/key.pem
	I0929 11:30:40.978985  317523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/key.pem (1675 bytes)
	I0929 11:30:40.979033  317523 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-292570/.minikube/ca.pem, removing ...
	I0929 11:30:40.979036  317523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-292570/.minikube/ca.pem
	I0929 11:30:40.979058  317523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/ca.pem (1078 bytes)
	I0929 11:30:40.979100  317523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem org=jenkins.functional-686485 san=[127.0.0.1 192.168.49.2 functional-686485 localhost minikube]
	I0929 11:30:41.499490  317523 provision.go:177] copyRemoteCerts
	I0929 11:30:41.499542  317523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:30:41.499586  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:41.517869  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:30:41.617486  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 11:30:41.643048  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 11:30:41.670335  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 11:30:41.694948  317523 provision.go:87] duration metric: took 734.547593ms to configureAuth
	I0929 11:30:41.694965  317523 ubuntu.go:206] setting minikube options for container-runtime
	I0929 11:30:41.695171  317523 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:30:41.695273  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:41.712769  317523 main.go:141] libmachine: Using SSH client type: native
	I0929 11:30:41.713064  317523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0929 11:30:41.713076  317523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:30:47.130694  317523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 11:30:47.130709  317523 machine.go:96] duration metric: took 6.66209378s to provisionDockerMachine
	I0929 11:30:47.130719  317523 start.go:293] postStartSetup for "functional-686485" (driver="docker")
	I0929 11:30:47.130729  317523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:30:47.130793  317523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:30:47.130838  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:47.148247  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:30:47.245268  317523 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:30:47.248477  317523 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 11:30:47.248499  317523 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 11:30:47.248509  317523 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 11:30:47.248514  317523 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 11:30:47.248524  317523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-292570/.minikube/addons for local assets ...
	I0929 11:30:47.248579  317523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-292570/.minikube/files for local assets ...
	I0929 11:30:47.248658  317523 filesync.go:149] local asset: /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem -> 2944252.pem in /etc/ssl/certs
	I0929 11:30:47.248731  317523 filesync.go:149] local asset: /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/test/nested/copy/294425/hosts -> hosts in /etc/test/nested/copy/294425
	I0929 11:30:47.248774  317523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/294425
	I0929 11:30:47.257279  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem --> /etc/ssl/certs/2944252.pem (1708 bytes)
	I0929 11:30:47.281536  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/test/nested/copy/294425/hosts --> /etc/test/nested/copy/294425/hosts (40 bytes)
	I0929 11:30:47.306790  317523 start.go:296] duration metric: took 176.05755ms for postStartSetup
	I0929 11:30:47.306861  317523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:30:47.306915  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:47.323659  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:30:47.417666  317523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 11:30:47.422378  317523 fix.go:56] duration metric: took 6.974728863s for fixHost
	I0929 11:30:47.422392  317523 start.go:83] releasing machines lock for "functional-686485", held for 6.974772446s
	I0929 11:30:47.422460  317523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-686485
	I0929 11:30:47.439352  317523 ssh_runner.go:195] Run: cat /version.json
	I0929 11:30:47.439394  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:47.439421  317523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:30:47.439469  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:30:47.457211  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:30:47.470599  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:30:47.679334  317523 ssh_runner.go:195] Run: systemctl --version
	I0929 11:30:47.683451  317523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:30:47.827973  317523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 11:30:47.832022  317523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:30:47.840480  317523 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 11:30:47.840547  317523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:30:47.849661  317523 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 11:30:47.849674  317523 start.go:495] detecting cgroup driver to use...
	I0929 11:30:47.849704  317523 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 11:30:47.849746  317523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:30:47.861913  317523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:30:47.872955  317523 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:30:47.873021  317523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:30:47.886336  317523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:30:47.898931  317523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:30:48.041362  317523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:30:48.173932  317523 docker.go:234] disabling docker service ...
	I0929 11:30:48.174008  317523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:30:48.186556  317523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:30:48.198594  317523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:30:48.333993  317523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:30:48.475214  317523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:30:48.486626  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:30:48.503145  317523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 11:30:48.503219  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.513058  317523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:30:48.513112  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.523292  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.533251  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.544311  317523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:30:48.553764  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.563419  317523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.572936  317523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:30:48.582493  317523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:30:48.590693  317523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:30:48.598935  317523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:48.720450  317523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 11:30:49.435056  317523 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:30:49.435114  317523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:30:49.438861  317523 start.go:563] Will wait 60s for crictl version
	I0929 11:30:49.438909  317523 ssh_runner.go:195] Run: which crictl
	I0929 11:30:49.442209  317523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:30:49.482217  317523 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 11:30:49.482303  317523 ssh_runner.go:195] Run: crio --version
	I0929 11:30:49.523608  317523 ssh_runner.go:195] Run: crio --version
	I0929 11:30:49.565710  317523 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 11:30:49.568646  317523 cli_runner.go:164] Run: docker network inspect functional-686485 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 11:30:49.584565  317523 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 11:30:49.591482  317523 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0929 11:30:49.594642  317523 kubeadm.go:875] updating cluster {Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:30:49.594750  317523 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:30:49.594841  317523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:30:49.640122  317523 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:30:49.640135  317523 crio.go:433] Images already preloaded, skipping extraction
	I0929 11:30:49.640186  317523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:30:49.678186  317523 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:30:49.678198  317523 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:30:49.678209  317523 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0929 11:30:49.678302  317523 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-686485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:30:49.678391  317523 ssh_runner.go:195] Run: crio config
	I0929 11:30:49.729255  317523 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0929 11:30:49.729275  317523 cni.go:84] Creating CNI manager for ""
	I0929 11:30:49.729285  317523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:30:49.729292  317523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:30:49.729313  317523 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-686485 NodeName:functional-686485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:30:49.729431  317523 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-686485"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:30:49.729494  317523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:30:49.738456  317523 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:30:49.738513  317523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:30:49.746935  317523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0929 11:30:49.764652  317523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:30:49.782005  317523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0929 11:30:49.799649  317523 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 11:30:49.803394  317523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:30:49.935303  317523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:30:49.948101  317523 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485 for IP: 192.168.49.2
	I0929 11:30:49.948112  317523 certs.go:194] generating shared ca certs ...
	I0929 11:30:49.948127  317523 certs.go:226] acquiring lock for ca certs: {Name:mkd338253a13587776ce07e6238e0355c4b0e958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:30:49.948255  317523 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-292570/.minikube/ca.key
	I0929 11:30:49.948412  317523 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.key
	I0929 11:30:49.948419  317523 certs.go:256] generating profile certs ...
	I0929 11:30:49.948527  317523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.key
	I0929 11:30:49.948576  317523 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/apiserver.key.67211a0c
	I0929 11:30:49.948615  317523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/proxy-client.key
	I0929 11:30:49.948719  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/294425.pem (1338 bytes)
	W0929 11:30:49.948750  317523 certs.go:480] ignoring /home/jenkins/minikube-integration/21656-292570/.minikube/certs/294425_empty.pem, impossibly tiny 0 bytes
	I0929 11:30:49.948757  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 11:30:49.948780  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem (1078 bytes)
	I0929 11:30:49.948806  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:30:49.948830  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem (1675 bytes)
	I0929 11:30:49.948873  317523 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem (1708 bytes)
	I0929 11:30:49.949451  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:30:49.974174  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 11:30:49.998686  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:30:50.023114  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:30:50.053164  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 11:30:50.079969  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:30:50.112173  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:30:50.139566  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 11:30:50.164934  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/certs/294425.pem --> /usr/share/ca-certificates/294425.pem (1338 bytes)
	I0929 11:30:50.190235  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem --> /usr/share/ca-certificates/2944252.pem (1708 bytes)
	I0929 11:30:50.215351  317523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:30:50.239913  317523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:30:50.258103  317523 ssh_runner.go:195] Run: openssl version
	I0929 11:30:50.263399  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294425.pem && ln -fs /usr/share/ca-certificates/294425.pem /etc/ssl/certs/294425.pem"
	I0929 11:30:50.272610  317523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294425.pem
	I0929 11:30:50.277627  317523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:28 /usr/share/ca-certificates/294425.pem
	I0929 11:30:50.277684  317523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294425.pem
	I0929 11:30:50.288999  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294425.pem /etc/ssl/certs/51391683.0"
	I0929 11:30:50.320721  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2944252.pem && ln -fs /usr/share/ca-certificates/2944252.pem /etc/ssl/certs/2944252.pem"
	I0929 11:30:50.329849  317523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2944252.pem
	I0929 11:30:50.333808  317523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:28 /usr/share/ca-certificates/2944252.pem
	I0929 11:30:50.333863  317523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2944252.pem
	I0929 11:30:50.351023  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2944252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 11:30:50.363544  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:30:50.375034  317523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:30:50.379195  317523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:30:50.379253  317523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:30:50.388657  317523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 11:30:50.402681  317523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:30:50.412651  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 11:30:50.422828  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 11:30:50.432308  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 11:30:50.440143  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 11:30:50.449547  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 11:30:50.461902  317523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 11:30:50.469290  317523 kubeadm.go:392] StartCluster: {Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:30:50.469383  317523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 11:30:50.469448  317523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:30:50.508922  317523 cri.go:89] found id: "60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41"
	I0929 11:30:50.508934  317523 cri.go:89] found id: "a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405"
	I0929 11:30:50.508937  317523 cri.go:89] found id: "ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8"
	I0929 11:30:50.508940  317523 cri.go:89] found id: "e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745"
	I0929 11:30:50.508942  317523 cri.go:89] found id: "67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc"
	I0929 11:30:50.508945  317523 cri.go:89] found id: "ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45"
	I0929 11:30:50.508948  317523 cri.go:89] found id: "84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939"
	I0929 11:30:50.508950  317523 cri.go:89] found id: "0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d"
	I0929 11:30:50.508953  317523 cri.go:89] found id: ""
	I0929 11:30:50.509001  317523 ssh_runner.go:195] Run: sudo runc list -f json
	I0929 11:30:50.531747  317523 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d/userdata","rootfs":"/var/lib/containers/storage/overlay/8a2d5074c9c82677731008fdb72e08c1a028aca1396c6940014b095b899552f9/merged","created":"2025-09-29T11:30:18.559089546Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termi
nationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.200195131Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1c8159cf-d9b8-4964-81a9-3a541a78ede1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1c8159cf-d9b8-4964-81a9-3a541a78ede1/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisio
ner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8a2d5074c9c82677731008fdb72e08c1a028aca1396c6940014b095b899552f9/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_1c8159cf-d9b8-4964-81a9-3a541a78ede1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_1c8159cf-d9b8-4964-81a9-3a541a78ede1_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib
/kubelet/pods/1c8159cf-d9b8-4964-81a9-3a541a78ede1/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1c8159cf-d9b8-4964-81a9-3a541a78ede1/containers/storage-provisioner/2f72bbd8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1c8159cf-d9b8-4964-81a9-3a541a78ede1/volumes/kubernetes.io~projected/kube-api-access-w6jl5\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1c8159cf-d9b8-4964-81a9-3a541a78ede1","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-te
st\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2025-09-29T11:30:05.366114476Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41/userdata","rootfs":"/var/lib/containers/storage/overlay/be5375df1300e05d61645921900821993787e9fee53b63b0bd1010b2cb45ae54/merged","created":"2025-09-29T11:3
0:18.520786659Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41","io.kubern
etes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.350788084Z","io.kubernetes.cri-o.Image":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-686485\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"bda7994784646446b924dd8e7bf7821a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-686485_bda7994784646446b924dd8e7bf7821a/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/be5375df1300e05d61645921900821993787e9fee53b63b0bd1010b2cb45ae54/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-686485_kube-system_bda7994784646446b924
dd8e7bf7821a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-686485_kube-system_bda7994784646446b924dd8e7bf7821a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/bda7994784646446b924dd8e7bf7821a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/bda7994784646446b924dd8e7bf7821a/containers/etcd/ae993c10\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\"
:\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-686485","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"bda7994784646446b924dd8e7bf7821a","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"bda7994784646446b924dd8e7bf7821a","kubernetes.io/config.seen":"2025-09-29T11:29:09.893762990Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc/userdata","rootfs":"/var/lib/containers/storage/overlay/053129d03
a764b917487f095b2ad152ce427b72e2dba748f41cb6591c4e94245/merged","created":"2025-09-29T11:30:18.497632792Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}
","io.kubernetes.cri-o.ContainerID":"67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.264114424Z","io.kubernetes.cri-o.Image":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-686485\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2f45a864dc9b5e76810cfe1e08ccba6d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-686485_2f45a864dc9b5e76810cfe1e08ccba6d/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/053129d03a764b917487f095b2ad152ce427b72e2dba748f41cb6591c4e94245/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-686485_kube-system_2f45a864dc9b5e76810cfe1e08ccba6d_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-686485_kube-system_2f45a864dc9b5e76810cfe1e08ccba6d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\
":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2f45a864dc9b5e76810cfe1e08ccba6d/containers/kube-controller-manager/d414e2de\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2f45a864dc9b5e76810cfe1e08ccba6d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\
"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-686485","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2f45a864dc9b5e76810cfe1e08ccba6d","kubernetes.io/config.hash":"2f45a864dc9b5e76810cfe1e08ccba6d","kubernetes.io/config.seen":"2025-09-29T11:29:09.893768791Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939/us
erdata","rootfs":"/var/lib/containers/storage/overlay/704e75912e674338dad40353adc855f787c4eaea09e8fe3036bed7bfe563f7e7/merged","created":"2025-09-29T11:30:18.396730965Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\
",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.210610796Z","io.kubernetes.cri-o.Image":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-686485\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cafc9679f97aaf5bc65c227ee7fb3ea4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-686485_cafc9679f97aaf5bc65c227ee7fb3ea4/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/704e75912e674338dad40353adc855f787c4eaea09e8fe3036bed7bfe563f7e7/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-686485_kube-system_cafc9679f97aaf5bc65c227ee7fb3ea4_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-686485_kube-system_cafc9679f97aaf5bc65c227ee7fb3ea4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cafc9679f97aaf5bc65c227ee7fb3ea4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"contain
er_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cafc9679f97aaf5bc65c227ee7fb3ea4/containers/kube-scheduler/a62d9265\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-686485","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cafc9679f97aaf5bc65c227ee7fb3ea4","kubernetes.io/config.hash":"cafc9679f97aaf5bc65c227ee7fb3ea4","kubernetes.io/config.seen":"2025-09-29T11:29:09.893770136Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405/userdata","rootf
s":"/var/lib/containers/storage/overlay/ec874bcedf3d08b621825c9551a9b11f3e9eec1424bf1e1b6953b629686c9c4e/merged","created":"2025-09-29T11:30:18.643114721Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e2e56a4","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e2e56a4\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.32623131Z","io.kubernetes.cri-o.Image":"6fc3
2d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.34.0","io.kubernetes.cri-o.ImageRef":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-xs8dc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ec9ec386-a485-42d0-950e-56883d7a9f26\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-xs8dc_ec9ec386-a485-42d0-950e-56883d7a9f26/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ec874bcedf3d08b621825c9551a9b11f3e9eec1424bf1e1b6953b629686c9c4e/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-xs8dc_kube-system_ec9ec386-a485-42d0-950e-56883d7a9f26_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/85a762676d3f540a8
676b9caee1d08735895ed4d0abb409744afef1c63724770/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-xs8dc_kube-system_ec9ec386-a485-42d0-950e-56883d7a9f26_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ec9ec386-a485-42d0-950e-56883d7a9f26/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ec9ec386-a485-42d0-
950e-56883d7a9f26/containers/kube-proxy/ad7989fc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/ec9ec386-a485-42d0-950e-56883d7a9f26/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/ec9ec386-a485-42d0-950e-56883d7a9f26/volumes/kubernetes.io~projected/kube-api-access-tw2r7\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-xs8dc","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ec9ec386-a485-42d0-950e-56883d7a9f26","kubernetes.io/config.seen":"2025-09-29T11:29:23.902068916Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8","pid"
:0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8/userdata","rootfs":"/var/lib/containers/storage/overlay/720bee73aa666a1e41ba2c569c69ab631969eaa87d4f67584d5032b74416b632/merged","created":"2025-09-29T11:30:18.4994781Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"127fdb84","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"127fdb84\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148
f297185273a8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.317522528Z","io.kubernetes.cri-o.Image":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri-o.ImageRef":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-btlb5\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"45ae98fe-ca6f-4349-82af-33448daa0ce5\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-btlb5_45ae98fe-ca6f-4349-82af-33448daa0ce5/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/720bee73aa666a1e41ba2c569c69ab631969eaa87d4f67584d5032b74416b632/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-
cni_kindnet-btlb5_kube-system_45ae98fe-ca6f-4349-82af-33448daa0ce5_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-btlb5_kube-system_45ae98fe-ca6f-4349-82af-33448daa0ce5_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/45ae98fe-ca6f-4349-82af-33448daa0ce5/etc-hosts\",\"rea
donly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/45ae98fe-ca6f-4349-82af-33448daa0ce5/containers/kindnet-cni/3d7e20b1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/45ae98fe-ca6f-4349-82af-33448daa0ce5/volumes/kubernetes.io~projected/kube-api-access-hmn24\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-btlb5","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"45ae98fe-ca6f-4349-82af-33448daa0ce5","kubernetes.io/config.seen":"2025-09-29T11:29:23.853287861Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e3f
7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745/userdata","rootfs":"/var/lib/containers/storage/overlay/745035ccbe63ced4b436c83165325f8df1f59901b82da7b738f5890cb2d09e8b/merged","created":"2025-09-29T11:30:18.510283773Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8441,\\\"containerPort\\\":8441,\\\"protoc
ol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.298619036Z","io.kubernetes.cri-o.Image":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-686485\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8a00df89b1e084f8ad2ff0b1b29ca855\"}","io.kubernetes.cri-o.Log
Path":"/var/log/pods/kube-system_kube-apiserver-functional-686485_8a00df89b1e084f8ad2ff0b1b29ca855/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/745035ccbe63ced4b436c83165325f8df1f59901b82da7b738f5890cb2d09e8b/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-686485_kube-system_8a00df89b1e084f8ad2ff0b1b29ca855_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9af55b288172885384bc186e9a1e96d3a810c5bb9b058d23400ba9c96162ea85/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9af55b288172885384bc186e9a1e96d3a810c5bb9b058d23400ba9c96162ea85","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-686485_kube-system_8a00df89b1e084f8ad2ff0b1b29ca855_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri
-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8a00df89b1e084f8ad2ff0b1b29ca855/containers/kube-apiserver/bdc7c875\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8a00df89b1e084f8ad2ff0b1b29ca855/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path
\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-686485","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8a00df89b1e084f8ad2ff0b1b29ca855","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"8a00df89b1e084f8ad2ff0b1b29ca855","kubernetes.io/config.seen":"2025-09-29T11:29:09.893767043Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45/userdata","rootfs":"/var/lib/containers/storage/overlay/74a388207b65601ff06a6dbce7a1121de4a8d23b1d89aca83c186c241c989bd7/merged","created":"2025-09-29
T11:30:18.515037221Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9bf792","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9bf792\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"conta
inerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"liveness-probe\\\",\\\"containerPort\\\":8080,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"readiness-probe\\\",\\\"containerPort\\\":8181,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-29T11:30:18.24718286Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri-o.ImageRef":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","io.kubernetes.cri-o.Labels
":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bc5c9577-fcmb4\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bc5c9577-fcmb4_a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/74a388207b65601ff06a6dbce7a1121de4a8d23b1d89aca83c186c241c989bd7/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bc5c9577-fcmb4_kube-system_a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bc5c9577-fcmb4_kube-system
_a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0/containers/coredns/952368c7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0/volumes/kubernetes.io~project
ed/kube-api-access-g8mr4\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bc5c9577-fcmb4","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0","kubernetes.io/config.seen":"2025-09-29T11:30:05.373946526Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0929 11:30:50.532424  317523 cri.go:126] list returned 8 containers
	I0929 11:30:50.532433  317523 cri.go:129] container: {ID:0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d Status:stopped}
	I0929 11:30:50.532445  317523 cri.go:135] skipping {0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532452  317523 cri.go:129] container: {ID:60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41 Status:stopped}
	I0929 11:30:50.532458  317523 cri.go:135] skipping {60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532464  317523 cri.go:129] container: {ID:67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc Status:stopped}
	I0929 11:30:50.532468  317523 cri.go:135] skipping {67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532473  317523 cri.go:129] container: {ID:84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939 Status:stopped}
	I0929 11:30:50.532477  317523 cri.go:135] skipping {84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532482  317523 cri.go:129] container: {ID:a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405 Status:stopped}
	I0929 11:30:50.532486  317523 cri.go:135] skipping {a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532491  317523 cri.go:129] container: {ID:ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8 Status:stopped}
	I0929 11:30:50.532496  317523 cri.go:135] skipping {ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532501  317523 cri.go:129] container: {ID:e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745 Status:stopped}
	I0929 11:30:50.532509  317523 cri.go:135] skipping {e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532513  317523 cri.go:129] container: {ID:ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45 Status:stopped}
	I0929 11:30:50.532517  317523 cri.go:135] skipping {ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45 stopped}: state = "stopped", want "paused"
	I0929 11:30:50.532571  317523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:30:50.541227  317523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 11:30:50.541246  317523 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 11:30:50.541303  317523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 11:30:50.549684  317523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:30:50.550208  317523 kubeconfig.go:125] found "functional-686485" server: "https://192.168.49.2:8441"
	I0929 11:30:50.551699  317523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 11:30:50.560657  317523 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-09-29 11:29:00.664822374 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-09-29 11:30:49.795791580 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
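The diff above is how minikube decides to reconfigure: the live /var/tmp/minikube/kubeadm.yaml is compared against the freshly rendered kubeadm.yaml.new, and any difference triggers a control-plane restart from the new file. A minimal shell sketch of that check, with paths taken from the log (the echo/cp handling is illustrative, not minikube's actual code):

    # Detect kubeadm config drift the same way the log does: diff exits
    # non-zero when the files differ, which is treated as "reconfigure".
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "kubeadm config drift detected; reconfiguring from kubeadm.yaml.new"
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi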
	I0929 11:30:50.560678  317523 kubeadm.go:1152] stopping kube-system containers ...
	I0929 11:30:50.560690  317523 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0929 11:30:50.560741  317523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:30:50.601767  317523 cri.go:89] found id: "60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41"
	I0929 11:30:50.601779  317523 cri.go:89] found id: "a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405"
	I0929 11:30:50.601791  317523 cri.go:89] found id: "ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8"
	I0929 11:30:50.601794  317523 cri.go:89] found id: "e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745"
	I0929 11:30:50.601798  317523 cri.go:89] found id: "67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc"
	I0929 11:30:50.601800  317523 cri.go:89] found id: "ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45"
	I0929 11:30:50.601803  317523 cri.go:89] found id: "84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939"
	I0929 11:30:50.601805  317523 cri.go:89] found id: "0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d"
	I0929 11:30:50.601807  317523 cri.go:89] found id: ""
	I0929 11:30:50.601812  317523 cri.go:252] Stopping containers: [60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41 a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405 ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8 e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745 67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45 84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939 0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d]
	I0929 11:30:50.601872  317523 ssh_runner.go:195] Run: which crictl
	I0929 11:30:50.605637  317523 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41 a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405 ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8 e3f7341ca3cd514db9a6f3522d84573019b8b8dbad7678847ae0cf66097a3745 67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45 84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939 0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d
	I0929 11:30:50.679596  317523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
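The two commands above stop every kube-system container and then the kubelet before the control plane is regenerated. A hedged sketch of that step as a standalone snippet, using the same crictl label filter the log uses:

    # Collect all kube-system container IDs known to CRI-O, stop them with a
    # 10s grace period, then stop the kubelet so it does not restart them.
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    if [ -n "$ids" ]; then
        sudo crictl stop --timeout=10 $ids
    fi
    sudo systemctl stop kubelet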
	I0929 11:30:50.795127  317523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:30:50.803494  317523 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Sep 29 11:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Sep 29 11:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Sep 29 11:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Sep 29 11:29 /etc/kubernetes/scheduler.conf
	
	I0929 11:30:50.803554  317523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0929 11:30:50.812162  317523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0929 11:30:50.820593  317523 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:30:50.820649  317523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:30:50.829113  317523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0929 11:30:50.837696  317523 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:30:50.837754  317523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:30:50.846272  317523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0929 11:30:50.855030  317523 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:30:50.855086  317523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 11:30:50.863890  317523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:30:50.872613  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:30:50.924801  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:30:54.786934  317523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.862106588s)
	I0929 11:30:54.786955  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:30:54.983284  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:30:55.059966  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
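The five kubeadm invocations above make up the in-place control-plane restart. Consolidated as a hedged sketch (binary and config paths taken from the log; the ordering matters, since certs and kubeconfigs must exist before the static pod manifests are rewritten):

    # Re-run the kubeadm init phases used for an in-place control-plane restart.
    KB=/var/lib/minikube/binaries/v1.34.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KB:$PATH" kubeadm init phase certs all          --config "$CFG"
    sudo env PATH="$KB:$PATH" kubeadm init phase kubeconfig all     --config "$CFG"
    sudo env PATH="$KB:$PATH" kubeadm init phase kubelet-start      --config "$CFG"
    sudo env PATH="$KB:$PATH" kubeadm init phase control-plane all  --config "$CFG"
    sudo env PATH="$KB:$PATH" kubeadm init phase etcd local         --config "$CFG"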
	I0929 11:30:55.166220  317523 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:30:55.166302  317523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:30:55.666423  317523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:30:56.166829  317523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:30:56.193412  317523 api_server.go:72] duration metric: took 1.027207446s to wait for apiserver process to appear ...
	I0929 11:30:56.193426  317523 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:30:56.193445  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:30:59.312484  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 11:30:59.312503  317523 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 11:30:59.312515  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:30:59.415351  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 11:30:59.415369  317523 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 11:30:59.693702  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:30:59.704600  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:30:59.704617  317523 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:31:00.197678  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:31:00.303451  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:31:00.303474  317523 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:31:00.694213  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:31:00.734452  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:31:00.734480  317523 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:31:01.193704  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:31:01.201847  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0929 11:31:01.215530  317523 api_server.go:141] control plane version: v1.34.0
	I0929 11:31:01.215547  317523 api_server.go:131] duration metric: took 5.022116368s to wait for apiserver health ...
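The healthz probes above follow the usual restart pattern: 403 while the apiserver still sees the caller as system:anonymous, 500 while the rbac/bootstrap-roles and priority-class poststarthooks are still running, then 200 once bootstrap finishes. A hedged polling sketch against the same endpoint (curl's -k skips TLS verification, acceptable only for this kind of local readiness probe):

    # Poll the apiserver healthz endpoint until it reports 200/ok.
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.49.2:8441/healthz)" = "200" ]; do
        sleep 0.5
    done
    echo "apiserver healthy"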
	I0929 11:31:01.215556  317523 cni.go:84] Creating CNI manager for ""
	I0929 11:31:01.215562  317523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:31:01.219382  317523 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 11:31:01.222671  317523 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 11:31:01.227111  317523 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 11:31:01.227124  317523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 11:31:01.251706  317523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 11:31:01.860828  317523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:31:01.864488  317523 system_pods.go:59] 8 kube-system pods found
	I0929 11:31:01.864516  317523 system_pods.go:61] "coredns-66bc5c9577-fcmb4" [a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:31:01.864523  317523 system_pods.go:61] "etcd-functional-686485" [19552840-e7cd-4a09-9260-e1abcd85c27b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:31:01.864530  317523 system_pods.go:61] "kindnet-btlb5" [45ae98fe-ca6f-4349-82af-33448daa0ce5] Running
	I0929 11:31:01.864537  317523 system_pods.go:61] "kube-apiserver-functional-686485" [cd3b4e28-0996-4298-8418-929c9cd9ed55] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:31:01.864544  317523 system_pods.go:61] "kube-controller-manager-functional-686485" [3fcbb27c-3670-400d-a621-597389444163] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:31:01.864549  317523 system_pods.go:61] "kube-proxy-xs8dc" [ec9ec386-a485-42d0-950e-56883d7a9f26] Running
	I0929 11:31:01.864555  317523 system_pods.go:61] "kube-scheduler-functional-686485" [433196c2-0db6-4147-88ec-8056af3d2962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 11:31:01.864559  317523 system_pods.go:61] "storage-provisioner" [1c8159cf-d9b8-4964-81a9-3a541a78ede1] Running
	I0929 11:31:01.864565  317523 system_pods.go:74] duration metric: took 3.725714ms to wait for pod list to return data ...
	I0929 11:31:01.864571  317523 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:31:01.867276  317523 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 11:31:01.867296  317523 node_conditions.go:123] node cpu capacity is 2
	I0929 11:31:01.867309  317523 node_conditions.go:105] duration metric: took 2.731246ms to run NodePressure ...
	I0929 11:31:01.867327  317523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:31:02.119636  317523 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0929 11:31:02.123297  317523 kubeadm.go:735] kubelet initialised
	I0929 11:31:02.123309  317523 kubeadm.go:736] duration metric: took 3.657888ms waiting for restarted kubelet to initialise ...
	I0929 11:31:02.123328  317523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:31:02.132141  317523 ops.go:34] apiserver oom_adj: -16
	I0929 11:31:02.132152  317523 kubeadm.go:593] duration metric: took 11.590901006s to restartPrimaryControlPlane
	I0929 11:31:02.132160  317523 kubeadm.go:394] duration metric: took 11.662881317s to StartCluster
	I0929 11:31:02.132174  317523 settings.go:142] acquiring lock: {Name:mk8da0e06d1edc552f3cec9ed26678491ca734d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:02.132236  317523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:31:02.132893  317523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/kubeconfig: {Name:mk84aa46812be3352ca2874bd06be6025c5058bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:31:02.133102  317523 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:31:02.133364  317523 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:31:02.133425  317523 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 11:31:02.133480  317523 addons.go:69] Setting storage-provisioner=true in profile "functional-686485"
	I0929 11:31:02.133496  317523 addons.go:238] Setting addon storage-provisioner=true in "functional-686485"
	W0929 11:31:02.133502  317523 addons.go:247] addon storage-provisioner should already be in state true
	I0929 11:31:02.133521  317523 host.go:66] Checking if "functional-686485" exists ...
	I0929 11:31:02.133568  317523 addons.go:69] Setting default-storageclass=true in profile "functional-686485"
	I0929 11:31:02.133576  317523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-686485"
	I0929 11:31:02.133839  317523 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
	I0929 11:31:02.134370  317523 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
	I0929 11:31:02.138530  317523 out.go:179] * Verifying Kubernetes components...
	I0929 11:31:02.142236  317523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:31:02.173913  317523 addons.go:238] Setting addon default-storageclass=true in "functional-686485"
	W0929 11:31:02.173925  317523 addons.go:247] addon default-storageclass should already be in state true
	I0929 11:31:02.173949  317523 host.go:66] Checking if "functional-686485" exists ...
	I0929 11:31:02.174385  317523 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
	I0929 11:31:02.177640  317523 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:31:02.180601  317523 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:02.180614  317523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:31:02.180705  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:31:02.215760  317523 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:02.215773  317523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:31:02.215839  317523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
	I0929 11:31:02.217945  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:31:02.242376  317523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
	I0929 11:31:02.369806  317523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:31:02.384778  317523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:31:02.390179  317523 node_ready.go:35] waiting up to 6m0s for node "functional-686485" to be "Ready" ...
	I0929 11:31:02.393699  317523 node_ready.go:49] node "functional-686485" is "Ready"
	I0929 11:31:02.393715  317523 node_ready.go:38] duration metric: took 3.517099ms for node "functional-686485" to be "Ready" ...
	I0929 11:31:02.393726  317523 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:31:02.393789  317523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:31:02.399304  317523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:31:02.573911  317523 api_server.go:72] duration metric: took 440.78555ms to wait for apiserver process to appear ...
	I0929 11:31:02.573923  317523 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:31:02.573939  317523 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0929 11:31:02.588884  317523 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0929 11:31:02.594488  317523 api_server.go:141] control plane version: v1.34.0
	I0929 11:31:02.594515  317523 api_server.go:131] duration metric: took 20.585874ms to wait for apiserver health ...
	I0929 11:31:02.594522  317523 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:31:02.599763  317523 system_pods.go:59] 8 kube-system pods found
	I0929 11:31:02.599781  317523 system_pods.go:61] "coredns-66bc5c9577-fcmb4" [a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:31:02.599788  317523 system_pods.go:61] "etcd-functional-686485" [19552840-e7cd-4a09-9260-e1abcd85c27b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:31:02.599794  317523 system_pods.go:61] "kindnet-btlb5" [45ae98fe-ca6f-4349-82af-33448daa0ce5] Running
	I0929 11:31:02.599800  317523 system_pods.go:61] "kube-apiserver-functional-686485" [cd3b4e28-0996-4298-8418-929c9cd9ed55] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:31:02.599828  317523 system_pods.go:61] "kube-controller-manager-functional-686485" [3fcbb27c-3670-400d-a621-597389444163] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:31:02.599832  317523 system_pods.go:61] "kube-proxy-xs8dc" [ec9ec386-a485-42d0-950e-56883d7a9f26] Running
	I0929 11:31:02.599838  317523 system_pods.go:61] "kube-scheduler-functional-686485" [433196c2-0db6-4147-88ec-8056af3d2962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 11:31:02.599842  317523 system_pods.go:61] "storage-provisioner" [1c8159cf-d9b8-4964-81a9-3a541a78ede1] Running
	I0929 11:31:02.599846  317523 system_pods.go:74] duration metric: took 5.319727ms to wait for pod list to return data ...
	I0929 11:31:02.599852  317523 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:31:02.605743  317523 default_sa.go:45] found service account: "default"
	I0929 11:31:02.605758  317523 default_sa.go:55] duration metric: took 5.900761ms for default service account to be created ...
	I0929 11:31:02.605766  317523 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:31:02.609753  317523 system_pods.go:86] 8 kube-system pods found
	I0929 11:31:02.609772  317523 system_pods.go:89] "coredns-66bc5c9577-fcmb4" [a8bd6e44-4ec1-4cce-a86e-e5e0443b05b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:31:02.609782  317523 system_pods.go:89] "etcd-functional-686485" [19552840-e7cd-4a09-9260-e1abcd85c27b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:31:02.609786  317523 system_pods.go:89] "kindnet-btlb5" [45ae98fe-ca6f-4349-82af-33448daa0ce5] Running
	I0929 11:31:02.609792  317523 system_pods.go:89] "kube-apiserver-functional-686485" [cd3b4e28-0996-4298-8418-929c9cd9ed55] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:31:02.609797  317523 system_pods.go:89] "kube-controller-manager-functional-686485" [3fcbb27c-3670-400d-a621-597389444163] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:31:02.609800  317523 system_pods.go:89] "kube-proxy-xs8dc" [ec9ec386-a485-42d0-950e-56883d7a9f26] Running
	I0929 11:31:02.609805  317523 system_pods.go:89] "kube-scheduler-functional-686485" [433196c2-0db6-4147-88ec-8056af3d2962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 11:31:02.609809  317523 system_pods.go:89] "storage-provisioner" [1c8159cf-d9b8-4964-81a9-3a541a78ede1] Running
	I0929 11:31:02.609815  317523 system_pods.go:126] duration metric: took 4.043779ms to wait for k8s-apps to be running ...
	I0929 11:31:02.609821  317523 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:31:02.609878  317523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:31:03.287576  317523 system_svc.go:56] duration metric: took 677.746921ms WaitForService to wait for kubelet
	I0929 11:31:03.287590  317523 kubeadm.go:578] duration metric: took 1.154468727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:31:03.287604  317523 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:31:03.290620  317523 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0929 11:31:03.290950  317523 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 11:31:03.290963  317523 node_conditions.go:123] node cpu capacity is 2
	I0929 11:31:03.290973  317523 node_conditions.go:105] duration metric: took 3.364913ms to run NodePressure ...
	I0929 11:31:03.290984  317523 start.go:241] waiting for startup goroutines ...
	I0929 11:31:03.293538  317523 addons.go:514] duration metric: took 1.160125897s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0929 11:31:03.293562  317523 start.go:246] waiting for cluster config update ...
	I0929 11:31:03.293572  317523 start.go:255] writing updated cluster config ...
	I0929 11:31:03.293859  317523 ssh_runner.go:195] Run: rm -f paused
	I0929 11:31:03.297484  317523 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:31:03.301294  317523 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fcmb4" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 11:31:05.307765  317523 pod_ready.go:104] pod "coredns-66bc5c9577-fcmb4" is not "Ready", error: <nil>
	I0929 11:31:06.306148  317523 pod_ready.go:94] pod "coredns-66bc5c9577-fcmb4" is "Ready"
	I0929 11:31:06.306162  317523 pod_ready.go:86] duration metric: took 3.004856058s for pod "coredns-66bc5c9577-fcmb4" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:06.308566  317523 pod_ready.go:83] waiting for pod "etcd-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:06.814314  317523 pod_ready.go:94] pod "etcd-functional-686485" is "Ready"
	I0929 11:31:06.814327  317523 pod_ready.go:86] duration metric: took 505.749759ms for pod "etcd-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:06.816604  317523 pod_ready.go:83] waiting for pod "kube-apiserver-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 11:31:08.822508  317523 pod_ready.go:104] pod "kube-apiserver-functional-686485" is not "Ready", error: <nil>
	W0929 11:31:11.321360  317523 pod_ready.go:104] pod "kube-apiserver-functional-686485" is not "Ready", error: <nil>
	W0929 11:31:13.322623  317523 pod_ready.go:104] pod "kube-apiserver-functional-686485" is not "Ready", error: <nil>
	I0929 11:31:13.822109  317523 pod_ready.go:94] pod "kube-apiserver-functional-686485" is "Ready"
	I0929 11:31:13.822122  317523 pod_ready.go:86] duration metric: took 7.00550579s for pod "kube-apiserver-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:13.824337  317523 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.330269  317523 pod_ready.go:94] pod "kube-controller-manager-functional-686485" is "Ready"
	I0929 11:31:14.330285  317523 pod_ready.go:86] duration metric: took 505.935533ms for pod "kube-controller-manager-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.332558  317523 pod_ready.go:83] waiting for pod "kube-proxy-xs8dc" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.337087  317523 pod_ready.go:94] pod "kube-proxy-xs8dc" is "Ready"
	I0929 11:31:14.337101  317523 pod_ready.go:86] duration metric: took 4.530992ms for pod "kube-proxy-xs8dc" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.339291  317523 pod_ready.go:83] waiting for pod "kube-scheduler-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.420586  317523 pod_ready.go:94] pod "kube-scheduler-functional-686485" is "Ready"
	I0929 11:31:14.420601  317523 pod_ready.go:86] duration metric: took 81.297139ms for pod "kube-scheduler-functional-686485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:31:14.420610  317523 pod_ready.go:40] duration metric: took 11.123107056s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:31:14.480677  317523 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 11:31:14.483743  317523 out.go:179] * Done! kubectl is now configured to use "functional-686485" cluster and "default" namespace by default
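The final wait loop above checks one control-plane label selector at a time. Roughly the same check expressed with kubectl, as a hedged sketch (the context name is assumed to match the profile name, and the 4m timeout mirrors the log):

    # Wait for each core kube-system component to report Ready.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
        kubectl --context functional-686485 -n kube-system wait pod -l "$sel" \
            --for=condition=Ready --timeout=4m
    done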
	
	
	==> CRI-O <==
	Sep 29 11:32:10 functional-686485 crio[4144]: time="2025-09-29 11:32:10.438627414Z" level=info msg="Image docker.io/nginx:alpine not found" id=3780555b-b14c-4a90-baa1-c43614436573 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:32:25 functional-686485 crio[4144]: time="2025-09-29 11:32:25.205542123Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=86477bda-099e-4e1c-aa3f-f16f4ee0f0b9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:32:25 functional-686485 crio[4144]: time="2025-09-29 11:32:25.205771432Z" level=info msg="Image docker.io/nginx:alpine not found" id=86477bda-099e-4e1c-aa3f-f16f4ee0f0b9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:32:39 functional-686485 crio[4144]: time="2025-09-29 11:32:39.891230304Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=45e1adf5-6875-4849-a91b-2b9b88ec5142 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:32:39 functional-686485 crio[4144]: time="2025-09-29 11:32:39.893145825Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 29 11:33:10 functional-686485 crio[4144]: time="2025-09-29 11:33:10.159926852Z" level=info msg="Pulling image: docker.io/nginx:latest" id=645816fe-da16-4643-8399-294704cf5939 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:33:10 functional-686485 crio[4144]: time="2025-09-29 11:33:10.162137939Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 29 11:33:23 functional-686485 crio[4144]: time="2025-09-29 11:33:23.205918917Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=979bb8d9-4cfa-47c3-a99d-b40b6736e3d9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:33:23 functional-686485 crio[4144]: time="2025-09-29 11:33:23.206137406Z" level=info msg="Image docker.io/nginx:alpine not found" id=979bb8d9-4cfa-47c3-a99d-b40b6736e3d9 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:33:38 functional-686485 crio[4144]: time="2025-09-29 11:33:38.204784804Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d023e3a9-7281-45d9-8d14-7447ead2e35d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:33:38 functional-686485 crio[4144]: time="2025-09-29 11:33:38.205027160Z" level=info msg="Image docker.io/nginx:alpine not found" id=d023e3a9-7281-45d9-8d14-7447ead2e35d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:33:40 functional-686485 crio[4144]: time="2025-09-29 11:33:40.431192894Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=d4a12710-80a8-42d8-868b-82fdcaa24034 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:33:40 functional-686485 crio[4144]: time="2025-09-29 11:33:40.436360894Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 29 11:34:40 functional-686485 crio[4144]: time="2025-09-29 11:34:40.927399542Z" level=info msg="Pulling image: docker.io/nginx:latest" id=e4cd77f8-f9a4-489e-b7c6-310e37058c70 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:34:40 functional-686485 crio[4144]: time="2025-09-29 11:34:40.929507298Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 29 11:34:51 functional-686485 crio[4144]: time="2025-09-29 11:34:51.204544281Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=08c56603-9c26-47a7-a475-d4c23a3f12e7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:34:51 functional-686485 crio[4144]: time="2025-09-29 11:34:51.204803031Z" level=info msg="Image docker.io/nginx:alpine not found" id=08c56603-9c26-47a7-a475-d4c23a3f12e7 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:35:05 functional-686485 crio[4144]: time="2025-09-29 11:35:05.205302784Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=2693fbb4-dd89-45dc-81b6-636fd82fb04a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:35:05 functional-686485 crio[4144]: time="2025-09-29 11:35:05.205575589Z" level=info msg="Image docker.io/nginx:alpine not found" id=2693fbb4-dd89-45dc-81b6-636fd82fb04a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:35:17 functional-686485 crio[4144]: time="2025-09-29 11:35:17.204648327Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=13dac6da-64a9-400d-aff9-65caf4c11462 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:35:17 functional-686485 crio[4144]: time="2025-09-29 11:35:17.204869943Z" level=info msg="Image docker.io/nginx:alpine not found" id=13dac6da-64a9-400d-aff9-65caf4c11462 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:35:30 functional-686485 crio[4144]: time="2025-09-29 11:35:30.204933748Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=337f0839-7e0d-4680-8630-948f459d719d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:35:30 functional-686485 crio[4144]: time="2025-09-29 11:35:30.205171527Z" level=info msg="Image docker.io/nginx:alpine not found" id=337f0839-7e0d-4680-8630-948f459d719d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:35:30 functional-686485 crio[4144]: time="2025-09-29 11:35:30.205935521Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=1849c0ba-fc0f-4e4e-b163-90f8be0c271f name=/runtime.v1.ImageService/PullImage
	Sep 29 11:35:30 functional-686485 crio[4144]: time="2025-09-29 11:35:30.208136996Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
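
Note: the CRI-O log shows repeated ImageStatus lookups and PullImage attempts for docker.io/nginx:alpine that never complete. To reproduce the same checks on the node by hand, one option (a sketch, assuming SSH access through this run's minikube profile) is crictl:

    minikube -p functional-686485 ssh -- sudo crictl images | grep nginx
    minikube -p functional-686485 ssh -- sudo crictl pull docker.io/nginx:alpine

The second command should surface the underlying pull error directly instead of waiting out the kubelet's back-off cycle.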
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0acab5d004ce0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   2                   c3fb257895e24       coredns-66bc5c9577-fcmb4
	206bb1f896aa2       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf   4 minutes ago       Running             kube-proxy                2                   85a762676d3f5       kube-proxy-xs8dc
	5832399e5aad8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 minutes ago       Running             kindnet-cni               2                   b5e389b82b519       kindnet-btlb5
	e8c3e33f13e06       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Running             storage-provisioner       2                   2d780174ae1a9       storage-provisioner
	6b45c8de9c4ce       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570   4 minutes ago       Running             kube-controller-manager   2                   bae0a8024391e       kube-controller-manager-functional-686485
	7e02997c4a169       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be   4 minutes ago       Running             kube-apiserver            0                   3924fd9382104       kube-apiserver-functional-686485
	7965a1dedcfc2       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee   4 minutes ago       Running             kube-scheduler            2                   1b1e5f3189429       kube-scheduler-functional-686485
	6de9008a9f773       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   4 minutes ago       Running             etcd                      2                   48d4a4927f8d9       etcd-functional-686485
	60e8786739776       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   5 minutes ago       Exited              etcd                      1                   48d4a4927f8d9       etcd-functional-686485
	a43e6af95e6d5       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf   5 minutes ago       Exited              kube-proxy                1                   85a762676d3f5       kube-proxy-xs8dc
	ad4e2c1e43fa8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Exited              kindnet-cni               1                   b5e389b82b519       kindnet-btlb5
	67e39ed141cbe       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570   5 minutes ago       Exited              kube-controller-manager   1                   bae0a8024391e       kube-controller-manager-functional-686485
	ff16b9dcb68ef       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Exited              coredns                   1                   c3fb257895e24       coredns-66bc5c9577-fcmb4
	84cae6a2613ef       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee   5 minutes ago       Exited              kube-scheduler            1                   1b1e5f3189429       kube-scheduler-functional-686485
	0468e3a72325e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   2d780174ae1a9       storage-provisioner
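
Note: a container listing like the one above can normally be regenerated on the node with crictl, and individual (including exited) containers can then be inspected by ID, e.g. for the first coredns restart shown here:

    minikube -p functional-686485 ssh -- sudo crictl ps -a
    minikube -p functional-686485 ssh -- sudo crictl logs ff16b9dcb68ef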
	
	
	==> coredns [0acab5d004ce042926912ef6b546568fdb5f73d8e9af6c1bb44c31d95c375308] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34138 - 924 "HINFO IN 3820287925151504037.9173467966324991986. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026454485s
	
	
	==> coredns [ff16b9dcb68ef8bc82155b6640c0653bf0b528f4aad371a90d6c15f0d549aa45] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44113 - 5213 "HINFO IN 332428050901775543.5573564185816815509. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.031089405s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
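
Note: the connection-refused errors from this earlier coredns instance appear to line up with the API server restart; the pod then received SIGTERM and its replacement (0acab5d004ce0 above) came up cleanly. If that prior instance's output is needed again later, it can usually be retrieved from the kubelet with the --previous flag:

    kubectl --context functional-686485 -n kube-system logs coredns-66bc5c9577-fcmb4 --previous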
	
	
	==> describe nodes <==
	Name:               functional-686485
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-686485
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=functional-686485
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_29_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:29:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-686485
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:35:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:35:15 +0000   Mon, 29 Sep 2025 11:29:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:35:15 +0000   Mon, 29 Sep 2025 11:29:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:35:15 +0000   Mon, 29 Sep 2025 11:29:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:35:15 +0000   Mon, 29 Sep 2025 11:30:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-686485
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8f830ab79054939be3711c0d700e16a
	  System UUID:                bca24e78-d01e-4d6f-bf99-8242d437899c
	  Boot ID:                    3ea59072-b9ed-4996-bd90-d451fda04a88
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 coredns-66bc5c9577-fcmb4                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m9s
	  kube-system                 etcd-functional-686485                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m14s
	  kube-system                 kindnet-btlb5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m9s
	  kube-system                 kube-apiserver-functional-686485             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-controller-manager-functional-686485    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-proxy-xs8dc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-functional-686485             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m7s                   kube-proxy       
	  Normal   Starting                 4m31s                  kube-proxy       
	  Normal   Starting                 5m9s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m22s (x9 over 6m23s)  kubelet          Node functional-686485 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m22s (x8 over 6m23s)  kubelet          Node functional-686485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m22s (x7 over 6m23s)  kubelet          Node functional-686485 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     6m14s                  kubelet          Node functional-686485 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m14s                  kubelet          Node functional-686485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  6m14s                  kubelet          Node functional-686485 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           6m10s                  node-controller  Node functional-686485 event: Registered Node functional-686485 in Controller
	  Normal   NodeReady                5m27s                  kubelet          Node functional-686485 status is now: NodeReady
	  Normal   RegisteredNode           5m7s                   node-controller  Node functional-686485 event: Registered Node functional-686485 in Controller
	  Normal   Starting                 4m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node functional-686485 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m37s (x8 over 4m37s)  kubelet          Node functional-686485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m37s (x8 over 4m37s)  kubelet          Node functional-686485 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m30s                  node-controller  Node functional-686485 event: Registered Node functional-686485 in Controller
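
Note: the node description and event history above are the API server's view of the node and can be refreshed at any point with:

    kubectl --context functional-686485 describe node functional-686485

As a cross-check on the Allocated resources table, the 850m cpu request against the node's 2-CPU (2000m) capacity is what yields the 42% shown.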
	
	
	==> dmesg <==
	[Sep29 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015067] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.515134] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034601] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.790647] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.751861] kauditd_printk_skb: 36 callbacks suppressed
	[Sep29 10:36] hrtimer: interrupt took 21542036 ns
	[Sep29 11:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [60e8786739776f5566ee6b8e4cfa5f9ff4c1558be034258cd6861a90c645db41] <==
	{"level":"warn","ts":"2025-09-29T11:30:21.021527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.037463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.061168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.084023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.111050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.134277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:21.296547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51802","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:30:41.875243Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:30:41.875311Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-686485","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T11:30:41.875410Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:30:42.014972Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:30:42.016499Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016561Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016615Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:30:42.016625Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:30:42.016589Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T11:30:42.016663Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:30:42.016688Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016768Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:30:42.016813Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:30:42.016851Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:30:42.020537Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T11:30:42.020617Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:30:42.020650Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T11:30:42.020658Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-686485","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [6de9008a9f773874f4e141a751358ca0571fe1f41170f35bc9f8f40c67ba6e9b] <==
	{"level":"warn","ts":"2025-09-29T11:30:57.915432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:57.933466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:57.947992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:57.971780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:57.985521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.005551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.022642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.034708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.053829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.076395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.101099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.111791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.125275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.145141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.166473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.177241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.199870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.213274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.235431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.313116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.317723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.355398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.392479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.443466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:30:58.543845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32852","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:35:33 up  1:18,  0 users,  load average: 0.29, 1.04, 2.00
	Linux functional-686485 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
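
Note: the kernel summary above is just the node's uptime, uname, and OS release strings; the same snapshot can be taken directly over SSH, for example:

    minikube -p functional-686485 ssh -- "uptime; uname -a; grep PRETTY_NAME /etc/os-release"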
	
	
	==> kindnet [5832399e5aad8fff21002574d904a6ff3227773daa9dd3dd491dbc401fa6c427] <==
	I0929 11:33:31.029538       1 main.go:301] handling current node
	I0929 11:33:41.030986       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:33:41.031025       1 main.go:301] handling current node
	I0929 11:33:51.036377       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:33:51.036411       1 main.go:301] handling current node
	I0929 11:34:01.030148       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:34:01.030188       1 main.go:301] handling current node
	I0929 11:34:11.029689       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:34:11.029725       1 main.go:301] handling current node
	I0929 11:34:21.037604       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:34:21.037745       1 main.go:301] handling current node
	I0929 11:34:31.032422       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:34:31.032459       1 main.go:301] handling current node
	I0929 11:34:41.032432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:34:41.032470       1 main.go:301] handling current node
	I0929 11:34:51.038195       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:34:51.038232       1 main.go:301] handling current node
	I0929 11:35:01.029964       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:35:01.030081       1 main.go:301] handling current node
	I0929 11:35:11.031540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:35:11.031582       1 main.go:301] handling current node
	I0929 11:35:21.032369       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:35:21.032490       1 main.go:301] handling current node
	I0929 11:35:31.030218       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:35:31.030258       1 main.go:301] handling current node
	
	
	==> kindnet [ad4e2c1e43fa84d519bf302d84e38ee953b3a92480982d31b148f297185273a8] <==
	I0929 11:30:18.823322       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 11:30:18.828165       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 11:30:18.829360       1 main.go:148] setting mtu 1500 for CNI 
	I0929 11:30:18.830669       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 11:30:18.830833       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T11:30:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 11:30:19.107517       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 11:30:19.107544       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 11:30:19.107553       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 11:30:19.107857       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0929 11:30:23.208854       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 11:30:23.208959       1 metrics.go:72] Registering metrics
	I0929 11:30:23.209067       1 controller.go:711] "Syncing nftables rules"
	I0929 11:30:29.106961       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:30:29.107024       1 main.go:301] handling current node
	I0929 11:30:39.107229       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:30:39.107260       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7e02997c4a169c16aa505782e2302d588dd8856611e6e53513deba2f5708373a] <==
	I0929 11:30:59.515044       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 11:30:59.515084       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0929 11:30:59.515091       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0929 11:30:59.515213       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I0929 11:30:59.519859       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0929 11:30:59.525157       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0929 11:30:59.527893       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0929 11:31:00.319561       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 11:31:00.323214       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 11:31:01.853808       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 11:31:01.974992       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 11:31:02.050924       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:31:02.059675       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:31:03.091581       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 11:31:03.136927       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:31:03.250641       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 11:31:18.308776       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.196.42"}
	I0929 11:31:24.676930       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.121.196"}
	I0929 11:31:57.661532       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:32:23.679655       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:33:05.149033       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:33:28.646220       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:34:12.334621       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:34:48.731415       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:35:23.156411       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [67e39ed141cbe1b5a0effc98f01cae9cff95af66bad0c3a3cbd85825d2f187dc] <==
	I0929 11:30:25.542283       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 11:30:25.542423       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:30:25.542497       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:30:25.542603       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-686485"
	I0929 11:30:25.542713       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:30:25.542747       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:30:25.542775       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 11:30:25.543066       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:30:25.548027       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 11:30:25.553859       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 11:30:25.557143       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:30:25.560371       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 11:30:25.562633       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:30:25.564920       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 11:30:25.567147       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 11:30:25.569765       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:30:25.573397       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:30:25.577340       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 11:30:25.583473       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:30:25.583515       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:30:25.584345       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:30:25.584416       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 11:30:25.593361       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:30:25.600470       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 11:30:25.602738       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-controller-manager [6b45c8de9c4ceaccdd57cc9b24372eb3e9939690f47613e29be4b40cd51089ef] <==
	I0929 11:31:02.858832       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:31:02.858844       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 11:31:02.866795       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 11:31:02.858745       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 11:31:02.886970       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:31:02.887132       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-686485"
	I0929 11:31:02.887246       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:31:02.866846       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 11:31:02.866874       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:31:02.866887       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 11:31:02.870049       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 11:31:02.885200       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:31:02.892593       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:31:02.889045       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 11:31:02.885222       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 11:31:02.889059       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 11:31:02.889070       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 11:31:02.889086       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 11:31:02.889097       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 11:31:02.893705       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:31:02.900998       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:31:02.967539       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:31:03.027692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:31:03.027846       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:31:03.027900       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [206bb1f896aa2f49fa4a9317779250e75cb2dcb01e984f74509e7b0c53120a9f] <==
	I0929 11:31:00.851542       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:31:00.945683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:31:01.047150       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:31:01.047194       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:31:01.047260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:31:01.155283       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:31:01.155348       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:31:01.159905       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:31:01.160233       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:31:01.160258       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:31:01.161823       1 config.go:200] "Starting service config controller"
	I0929 11:31:01.161842       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:31:01.161860       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:31:01.161865       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:31:01.161876       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:31:01.161881       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:31:01.162622       1 config.go:309] "Starting node config controller"
	I0929 11:31:01.162640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:31:01.162648       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:31:01.262759       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:31:01.262805       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:31:01.262852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a43e6af95e6d5a46e6421145c9c2772ae45879bde1fe5ea4832488590e971405] <==
	I0929 11:30:23.555511       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:30:23.722149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:30:23.823478       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:30:23.836615       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:30:23.836768       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:30:23.890249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:30:23.890411       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:30:23.895038       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:30:23.895383       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:30:23.895532       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:30:23.896771       1 config.go:200] "Starting service config controller"
	I0929 11:30:23.896829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:30:23.896869       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:30:23.896900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:30:23.896940       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:30:23.896967       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:30:23.897640       1 config.go:309] "Starting node config controller"
	I0929 11:30:23.897704       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:30:23.897749       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:30:23.997604       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:30:23.997652       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:30:23.997666       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7965a1dedcfc222b178cf7fa524081fbc78a93dd33338864f2d39c59fa5a3fe3] <==
	I0929 11:30:58.505418       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:30:59.404852       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:30:59.404958       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:30:59.404994       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:30:59.405035       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:30:59.453738       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:30:59.453770       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:30:59.455978       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:59.456059       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:59.456368       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:30:59.456639       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:30:59.560470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [84cae6a2613ef648042633246698c0cda172e2a6a86994d4bdbc6b04e5655939] <==
	I0929 11:30:21.463798       1 serving.go:386] Generated self-signed cert in-memory
	I0929 11:30:24.008390       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:30:24.008422       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:30:24.013863       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:30:24.013954       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 11:30:24.013977       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 11:30:24.014007       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:30:24.016191       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:24.016218       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:24.016235       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:24.016240       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:24.114075       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 11:30:24.116806       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:24.116889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:41.877802       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 11:30:41.877832       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 11:30:41.877893       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 11:30:41.877933       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:30:41.877964       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:30:41.879564       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0929 11:30:41.880048       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 11:30:41.880155       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.298286    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe: Error finding container b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe: Status 404 returned error can't find the container with id b5e389b82b51934113ba77f5c756a240918f9654e40f0565d6f05ff8c08cf8fe
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.298490    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7: Error finding container 1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7: Status 404 returned error can't find the container with id 1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.298725    4481 manager.go:1116] Failed to create existing container: /crio-1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7: Error finding container 1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7: Status 404 returned error can't find the container with id 1b1e5f318942912e76f204852f1c02c5fd8afd0bd1c4d71e37f5db011105e5b7
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.299000    4481 manager.go:1116] Failed to create existing container: /crio-c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4: Error finding container c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4: Status 404 returned error can't find the container with id c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.299280    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e: Error finding container bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e: Status 404 returned error can't find the container with id bae0a8024391e69a2a27c5931e415e57eb55e09662e1ac5e00d1ecf3484a5f0e
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.299595    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770: Error finding container 85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770: Status 404 returned error can't find the container with id 85a762676d3f540a8676b9caee1d08735895ed4d0abb409744afef1c63724770
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.299863    4481 manager.go:1116] Failed to create existing container: /crio-49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a: Error finding container 49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a: Status 404 returned error can't find the container with id 49b2858d49f0af6a8229d89b9f5a15337768fbb7144b7ad5ad1486bc8ff34b0a
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.300129    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639: Error finding container 2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639: Status 404 returned error can't find the container with id 2d780174ae1a92af7de2359ee16863ad10af89d0586ac0539cd864e4a91be639
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.300920    4481 manager.go:1116] Failed to create existing container: /docker/94cef4d5f9be96f5ccf28457364aaa9ce5a15b3b1dda8c0163c916fff747b5c7/crio-c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4: Error finding container c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4: Status 404 returned error can't find the container with id c3fb257895e2407b0f5aa499c491f722d04d6a7e8bb4bdc2370587af68424fb4
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.301148    4481 manager.go:1116] Failed to create existing container: /crio-48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d: Error finding container 48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d: Status 404 returned error can't find the container with id 48d4a4927f8d986b052bf4e9441d5c8f29bf9bf5d1c554a2feb1ff79b71c1b7d
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.333894    4481 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145695333647863 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:34:55 functional-686485 kubelet[4481]: E0929 11:34:55.333943    4481 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145695333647863 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:35:05 functional-686485 kubelet[4481]: E0929 11:35:05.207046    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="0e76e3fd-a5fb-4ed9-9d4c-081d08f4e974"
	Sep 29 11:35:05 functional-686485 kubelet[4481]: E0929 11:35:05.335810    4481 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145705335121708 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:35:05 functional-686485 kubelet[4481]: E0929 11:35:05.335844    4481 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145705335121708 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:35:11 functional-686485 kubelet[4481]: E0929 11:35:11.209790    4481 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 11:35:11 functional-686485 kubelet[4481]: E0929 11:35:11.209852    4481 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 11:35:11 functional-686485 kubelet[4481]: E0929 11:35:11.209926    4481 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(3f43496a-bd99-48c7-a4c6-2775c0a9ffc0): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:35:11 functional-686485 kubelet[4481]: E0929 11:35:11.209962    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3f43496a-bd99-48c7-a4c6-2775c0a9ffc0"
	Sep 29 11:35:15 functional-686485 kubelet[4481]: E0929 11:35:15.337694    4481 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145715337463257 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:35:15 functional-686485 kubelet[4481]: E0929 11:35:15.337729    4481 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145715337463257 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:35:17 functional-686485 kubelet[4481]: E0929 11:35:17.205441    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="0e76e3fd-a5fb-4ed9-9d4c-081d08f4e974"
	Sep 29 11:35:25 functional-686485 kubelet[4481]: E0929 11:35:25.339009    4481 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145725338743487 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:35:25 functional-686485 kubelet[4481]: E0929 11:35:25.339048    4481 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145725338743487 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:157169} inodes_used:{value:77}}"
	Sep 29 11:35:26 functional-686485 kubelet[4481]: E0929 11:35:26.204476    4481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3f43496a-bd99-48c7-a4c6-2775c0a9ffc0"
	
	
	==> storage-provisioner [0468e3a72325e797e16c91365312b2ec13f8d3ce7c366c5de9f59b81ea1d370d] <==
	I0929 11:30:19.697307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 11:30:23.184671       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 11:30:23.184721       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 11:30:23.202638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:26.676372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:30.936415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:34.535393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:37.589413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:40.611236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:40.618353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:30:40.618613       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 11:30:40.618782       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"33ea9649-429c-4699-8969-249f1f9741d0", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-686485_b4c49bc4-1e05-4be1-82e5-7d48a5f3f843 became leader
	I0929 11:30:40.618838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-686485_b4c49bc4-1e05-4be1-82e5-7d48a5f3f843!
	W0929 11:30:40.620931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:40.626871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:30:40.721671       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-686485_b4c49bc4-1e05-4be1-82e5-7d48a5f3f843!
	
	
	==> storage-provisioner [e8c3e33f13e06057074796f0984e1abe6593c9a7d5cf652efcac19bcbcd63795] <==
	W0929 11:35:09.372775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:11.375622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:11.380383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:13.383668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:13.387912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:15.391085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:15.397367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:17.401095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:17.405638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:19.408708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:19.416106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:21.418485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:21.422760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:23.426475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:23.430837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:25.433957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:25.438184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:27.441177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:27.447915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:29.450335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:29.454473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:31.464588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:31.477064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:33.482667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:35:33.498254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-686485 -n functional-686485
helpers_test.go:269: (dbg) Run:  kubectl --context functional-686485 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-686485 describe pod nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-686485 describe pod nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:31:24 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vslmk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vslmk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m9s                 default-scheduler  Successfully assigned default/nginx-svc to functional-686485
	  Warning  Failed     3m25s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m24s                kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     54s (x3 over 3m25s)  kubelet            Error: ErrImagePull
	  Warning  Failed     54s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    17s (x5 over 3m24s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     17s (x5 over 3m24s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    4s (x4 over 4m9s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-686485/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:31:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qtcb2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-qtcb2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-686485
	  Normal   Pulling    88s (x3 over 4m3s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     23s (x3 over 2m55s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     23s (x3 over 2m55s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x3 over 2m54s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     8s (x3 over 2m54s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (249.74s)
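
The pod events above show the real blocker: every pull of docker.io/nginx hit Docker Hub's unauthenticated rate limit (toomanyrequests), so neither nginx-svc nor sp-pod ever started; the failure is in image pulling rather than anything storage-specific. A minimal mitigation sketch, assuming the test host can still pull the images once (profile and image names taken from this run):

	# Pull on the host, then side-load into the cluster so the kubelet never has to contact docker.io.
	docker pull docker.io/library/nginx:alpine
	docker pull docker.io/library/nginx:latest
	minikube -p functional-686485 image load docker.io/library/nginx:alpine
	minikube -p functional-686485 image load docker.io/library/nginx:latest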

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-686485 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [0e76e3fd-a5fb-4ed9-9d4c-081d08f4e974] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-686485 -n functional-686485
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-29 11:35:25.000099669 +0000 UTC m=+909.797872469
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-686485 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-686485 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-686485/192.168.49.2
Start Time:       Mon, 29 Sep 2025 11:31:24 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:  10.244.0.4
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vslmk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vslmk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-686485
Warning  Failed     3m16s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m15s                kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    107s (x3 over 4m)    kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     45s (x3 over 3m16s)  kubelet            Error: ErrImagePull
Warning  Failed     45s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    8s (x5 over 3m15s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     8s (x5 over 3m15s)   kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-686485 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-686485 logs nginx-svc -n default: exit status 1 (106.990963ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-686485 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.95s)
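
Same root cause: docker.io refused the unauthenticated manifest reads for nginx:alpine. A hedged alternative to side-loading the image is to authenticate the pulls, which raises the Hub limit; the secret name and the <user>/<token> placeholders below are illustrative only:

	kubectl --context functional-686485 create secret docker-registry dockerhub-creds \
	  --docker-username=<user> --docker-password=<token>
	# Attach the secret to the default service account so new pods in "default" use it.
	kubectl --context functional-686485 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'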

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0929 11:35:25.205070  294425 retry.go:31] will retry after 2.822762438s: Temporary Error: Get "http:": http: no Host in request URL
I0929 11:35:28.028974  294425 retry.go:31] will retry after 5.4327135s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-686485 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.99.121.196   10.99.121.196   80:30133/TCP   5m52s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.39s)
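
The request URL here is literally "http:" with no host, and even the LoadBalancer IP the service did get (10.99.121.196 above) had nothing behind it, since the nginx-svc pod was still in ImagePullBackOff. A quick manual check while `minikube tunnel` is running might look like:

	kubectl --context functional-686485 get endpoints nginx-svc   # empty ENDPOINTS => no ready backend
	curl -s -o /dev/null -w '%{http_code}\n' http://10.99.121.196/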

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-686485 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-686485 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-jc96t" [dfb2492f-5d90-4b4b-a067-e71679a1b43c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0929 11:38:25.872872  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:43:25.872944  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:44:48.944918  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-686485 -n functional-686485
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-29 11:47:17.239986566 +0000 UTC m=+1622.037759349
functional_test.go:1460: (dbg) Run:  kubectl --context functional-686485 describe po hello-node-75c85bcc94-jc96t -n default
functional_test.go:1460: (dbg) kubectl --context functional-686485 describe po hello-node-75c85bcc94-jc96t -n default:
Name:             hello-node-75c85bcc94-jc96t
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-686485/192.168.49.2
Start Time:       Mon, 29 Sep 2025 11:37:16 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lcj6k (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lcj6k:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-jc96t to functional-686485
Normal   Pulling    6m36s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m7s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m7s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m49s (x16 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    3m40s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-686485 logs hello-node-75c85bcc94-jc96t -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-686485 logs hello-node-75c85bcc94-jc96t -n default: exit status 1 (99.078491ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-jc96t" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-686485 logs hello-node-75c85bcc94-jc96t -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.72s)
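
Unlike the nginx failures, this one is not a rate limit: CRI-O rejected the short name "kicbase/echo-server" because the node's /etc/containers/registries.conf defines no unqualified-search registries. Two usual ways around it, sketched here on the assumption that the image is published on Docker Hub as docker.io/kicbase/echo-server:1.0:

	# Option 1: deploy with a fully qualified reference so no short-name resolution is needed.
	kubectl --context functional-686485 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:1.0
	# Option 2: allow short names on the node by adding to /etc/containers/registries.conf:
	#   unqualified-search-registries = ["docker.io"]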

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 service --namespace=default --https --url hello-node: exit status 115 (393.087282ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31016
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-686485 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 service hello-node --url --format={{.IP}}: exit status 115 (394.94603ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-686485 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 service hello-node --url: exit status 115 (390.594656ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31016
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-686485 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31016
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.39s)
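
The HTTPS, Format, and URL failures above share one cause: `minikube service` resolved the NodePort (31016) but exited with SVC_UNREACHABLE because no pod backs hello-node. A way to confirm that from the kubectl side might be:

	kubectl --context functional-686485 get endpoints hello-node
	# An empty ENDPOINTS column matches the SVC_UNREACHABLE exit: the port is allocated,
	# but there is no Running pod to answer on http://192.168.49.2:31016.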

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (939.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0929 12:38:36.291974  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:45.960409  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:45.966759  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:45.978110  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:45.999642  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:46.041672  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:46.123508  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:46.285615  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:46.608268  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:47.249611  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:48.531623  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:51.093742  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:38:56.215077  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:39:06.456512  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:39:26.938284  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p calico-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (15m39.470735046s)

                                                
                                                
-- stdout --
	* [calico-800992] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-800992" primary control-plane node in "calico-800992" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:38:29.961526  527688 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:38:29.962083  527688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:38:29.962119  527688 out.go:374] Setting ErrFile to fd 2...
	I0929 12:38:29.962140  527688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:38:29.962433  527688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 12:38:29.962909  527688 out.go:368] Setting JSON to false
	I0929 12:38:29.963855  527688 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8461,"bootTime":1759141049,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0929 12:38:29.963947  527688 start.go:140] virtualization:  
	I0929 12:38:29.969688  527688 out.go:179] * [calico-800992] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 12:38:29.973109  527688 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 12:38:29.973189  527688 notify.go:220] Checking for updates...
	I0929 12:38:29.979745  527688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:38:29.983068  527688 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 12:38:29.986296  527688 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	I0929 12:38:29.989431  527688 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 12:38:29.992688  527688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:38:29.996242  527688 config.go:182] Loaded profile config "kindnet-800992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:38:29.996492  527688 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:38:30.035487  527688 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 12:38:30.035637  527688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:38:30.155769  527688 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-29 12:38:30.14007963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 12:38:30.155877  527688 docker.go:318] overlay module found
	I0929 12:38:30.159236  527688 out.go:179] * Using the docker driver based on user configuration
	I0929 12:38:30.162151  527688 start.go:304] selected driver: docker
	I0929 12:38:30.162168  527688 start.go:924] validating driver "docker" against <nil>
	I0929 12:38:30.162183  527688 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:38:30.162953  527688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:38:30.270163  527688 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-29 12:38:30.257585773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 12:38:30.270342  527688 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 12:38:30.270575  527688 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:38:30.273660  527688 out.go:179] * Using Docker driver with root privileges
	I0929 12:38:30.276435  527688 cni.go:84] Creating CNI manager for "calico"
	I0929 12:38:30.276464  527688 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0929 12:38:30.276559  527688 start.go:348] cluster config:
	{Name:calico-800992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-800992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:38:30.279452  527688 out.go:179] * Starting "calico-800992" primary control-plane node in "calico-800992" cluster
	I0929 12:38:30.282210  527688 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 12:38:30.285035  527688 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:38:30.287786  527688 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 12:38:30.287837  527688 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 12:38:30.287845  527688 cache.go:58] Caching tarball of preloaded images
	I0929 12:38:30.287937  527688 preload.go:172] Found /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0929 12:38:30.287955  527688 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 12:38:30.288073  527688 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/config.json ...
	I0929 12:38:30.288090  527688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/config.json: {Name:mk29893dd54c41fbfbf3a5b7e2b505b1707ba127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
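
The config-save step above serializes the whole cluster config (the struct dumped a few lines earlier) to the profile's config.json under a file lock. As a rough, hypothetical sketch of that persistence pattern only — the struct and field names below are illustrative, not minikube's actual ClusterConfig type, and the real write is lock-guarded:

    package main

    import (
        "encoding/json"
        "os"
    )

    // profileConfig is a made-up subset of the fields visible in the log dump.
    type profileConfig struct {
        Name             string
        Driver           string
        MemoryMB         int
        CPUs             int
        ContainerRuntime string
        CNI              string
    }

    func main() {
        cfg := profileConfig{
            Name:             "calico-800992",
            Driver:           "docker",
            MemoryMB:         3072,
            CPUs:             2,
            ContainerRuntime: "crio",
            CNI:              "calico",
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        // In minikube this write goes to .minikube/profiles/<name>/config.json
        // and is wrapped in the WriteFile lock shown in the log.
        if err := os.WriteFile("config.json", data, 0o644); err != nil {
            panic(err)
        }
    }
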
	I0929 12:38:30.288248  527688 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:38:30.323466  527688 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:38:30.323485  527688 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:38:30.323498  527688 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:38:30.323520  527688 start.go:360] acquireMachinesLock for calico-800992: {Name:mk1e8dd457165a857328587158b5b266845280e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:38:30.324171  527688 start.go:364] duration metric: took 631.032µs to acquireMachinesLock for "calico-800992"
	I0929 12:38:30.324212  527688 start.go:93] Provisioning new machine with config: &{Name:calico-800992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-800992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 12:38:30.324328  527688 start.go:125] createHost starting for "" (driver="docker")
	I0929 12:38:30.327724  527688 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 12:38:30.327956  527688 start.go:159] libmachine.API.Create for "calico-800992" (driver="docker")
	I0929 12:38:30.327984  527688 client.go:168] LocalClient.Create starting
	I0929 12:38:30.328058  527688 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem
	I0929 12:38:30.328091  527688 main.go:141] libmachine: Decoding PEM data...
	I0929 12:38:30.328104  527688 main.go:141] libmachine: Parsing certificate...
	I0929 12:38:30.328167  527688 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem
	I0929 12:38:30.328184  527688 main.go:141] libmachine: Decoding PEM data...
	I0929 12:38:30.328194  527688 main.go:141] libmachine: Parsing certificate...
	I0929 12:38:30.328597  527688 cli_runner.go:164] Run: docker network inspect calico-800992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 12:38:30.357862  527688 cli_runner.go:211] docker network inspect calico-800992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 12:38:30.357951  527688 network_create.go:284] running [docker network inspect calico-800992] to gather additional debugging logs...
	I0929 12:38:30.357969  527688 cli_runner.go:164] Run: docker network inspect calico-800992
	W0929 12:38:30.377940  527688 cli_runner.go:211] docker network inspect calico-800992 returned with exit code 1
	I0929 12:38:30.377976  527688 network_create.go:287] error running [docker network inspect calico-800992]: docker network inspect calico-800992: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-800992 not found
	I0929 12:38:30.377990  527688 network_create.go:289] output of [docker network inspect calico-800992]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-800992 not found
	
	** /stderr **
	I0929 12:38:30.378090  527688 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:38:30.398278  527688 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-412f8ec3d590 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:74:dd:ee:56:f0} reservation:<nil>}
	I0929 12:38:30.398759  527688 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8a5161e2587d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:49:3a:4c:6a:f3} reservation:<nil>}
	I0929 12:38:30.399014  527688 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1fe1165317b5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:76:8e:42:4a:0a} reservation:<nil>}
	I0929 12:38:30.399419  527688 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7170}
	I0929 12:38:30.399438  527688 network_create.go:124] attempt to create docker network calico-800992 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0929 12:38:30.399501  527688 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-800992 calico-800992
	I0929 12:38:30.472094  527688 network_create.go:108] docker network calico-800992 192.168.76.0/24 created
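
After skipping the subnets already taken by other minikube networks, the log shows the bridge network being created with the first free /24 (192.168.76.0/24). A minimal Go sketch of that one step, assuming the subnet has already been chosen and simply shelling out with the same flags the log records (this mirrors the command, not minikube's internal network_create package):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createClusterNetwork reproduces the `docker network create` invocation
    // from the log for a given cluster name, subnet, gateway and MTU.
    func createClusterNetwork(name, subnet, gateway string, mtu int) error {
        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet="+subnet,
            "--gateway="+gateway,
            "-o", "--ip-masq",
            "-o", "--icc",
            "-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io="+name,
            name)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker network create: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := createClusterNetwork("calico-800992", "192.168.76.0/24", "192.168.76.1", 1500); err != nil {
            fmt.Println(err)
        }
    }
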
	I0929 12:38:30.472123  527688 kic.go:121] calculated static IP "192.168.76.2" for the "calico-800992" container
	I0929 12:38:30.472212  527688 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 12:38:30.489947  527688 cli_runner.go:164] Run: docker volume create calico-800992 --label name.minikube.sigs.k8s.io=calico-800992 --label created_by.minikube.sigs.k8s.io=true
	I0929 12:38:30.518070  527688 oci.go:103] Successfully created a docker volume calico-800992
	I0929 12:38:30.518276  527688 cli_runner.go:164] Run: docker run --rm --name calico-800992-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-800992 --entrypoint /usr/bin/test -v calico-800992:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 12:38:31.179459  527688 oci.go:107] Successfully prepared a docker volume calico-800992
	I0929 12:38:31.179509  527688 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 12:38:31.179529  527688 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 12:38:31.179605  527688 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v calico-800992:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 12:38:36.097481  527688 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v calico-800992:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.91782544s)
	I0929 12:38:36.097513  527688 kic.go:203] duration metric: took 4.917980054s to extract preloaded images to volume ...
	W0929 12:38:36.097664  527688 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 12:38:36.097828  527688 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 12:38:36.200846  527688 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-800992 --name calico-800992 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-800992 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-800992 --network calico-800992 --ip 192.168.76.2 --volume calico-800992:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 12:38:36.704300  527688 cli_runner.go:164] Run: docker container inspect calico-800992 --format={{.State.Running}}
	I0929 12:38:36.735688  527688 cli_runner.go:164] Run: docker container inspect calico-800992 --format={{.State.Status}}
	I0929 12:38:36.769353  527688 cli_runner.go:164] Run: docker exec calico-800992 stat /var/lib/dpkg/alternatives/iptables
	I0929 12:38:36.851677  527688 oci.go:144] the created container "calico-800992" has a running status.
	I0929 12:38:36.851737  527688 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21656-292570/.minikube/machines/calico-800992/id_rsa...
	I0929 12:38:37.323849  527688 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21656-292570/.minikube/machines/calico-800992/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 12:38:37.346551  527688 cli_runner.go:164] Run: docker container inspect calico-800992 --format={{.State.Status}}
	I0929 12:38:37.373671  527688 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 12:38:37.373689  527688 kic_runner.go:114] Args: [docker exec --privileged calico-800992 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 12:38:37.454594  527688 cli_runner.go:164] Run: docker container inspect calico-800992 --format={{.State.Status}}
	I0929 12:38:37.485838  527688 machine.go:93] provisionDockerMachine start ...
	I0929 12:38:37.485925  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:38:37.512219  527688 main.go:141] libmachine: Using SSH client type: native
	I0929 12:38:37.512548  527688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0929 12:38:37.512558  527688 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:38:37.513312  527688 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42840->127.0.0.1:33478: read: connection reset by peer
	I0929 12:38:40.659567  527688 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-800992
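
The repeated `docker container inspect -f ... HostPort` calls around this point resolve which localhost port Docker mapped to the container's 22/tcp (here 127.0.0.1:33478) so the SSH provisioner can dial it. A small Go sketch of that lookup, shelling out with the same Go-template format string; the helper name is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort returns the host port published for the container's 22/tcp.
    func sshHostPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("calico-800992")
        if err != nil {
            fmt.Println(err)
            return
        }
        // The provisioner then dials 127.0.0.1:<port>, e.g. 127.0.0.1:33478 above.
        fmt.Println("ssh port:", port)
    }
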
	
	I0929 12:38:40.659606  527688 ubuntu.go:182] provisioning hostname "calico-800992"
	I0929 12:38:40.659695  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:38:40.696807  527688 main.go:141] libmachine: Using SSH client type: native
	I0929 12:38:40.697131  527688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0929 12:38:40.697148  527688 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-800992 && echo "calico-800992" | sudo tee /etc/hostname
	I0929 12:38:40.888931  527688 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-800992
	
	I0929 12:38:40.889082  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:38:40.913965  527688 main.go:141] libmachine: Using SSH client type: native
	I0929 12:38:40.914275  527688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0929 12:38:40.914295  527688 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-800992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-800992/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-800992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:38:41.076605  527688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:38:41.076635  527688 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21656-292570/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-292570/.minikube}
	I0929 12:38:41.076714  527688 ubuntu.go:190] setting up certificates
	I0929 12:38:41.076728  527688 provision.go:84] configureAuth start
	I0929 12:38:41.076804  527688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-800992
	I0929 12:38:41.095817  527688 provision.go:143] copyHostCerts
	I0929 12:38:41.095913  527688 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-292570/.minikube/ca.pem, removing ...
	I0929 12:38:41.095949  527688 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-292570/.minikube/ca.pem
	I0929 12:38:41.096058  527688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/ca.pem (1078 bytes)
	I0929 12:38:41.096201  527688 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-292570/.minikube/cert.pem, removing ...
	I0929 12:38:41.096212  527688 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-292570/.minikube/cert.pem
	I0929 12:38:41.096244  527688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/cert.pem (1123 bytes)
	I0929 12:38:41.096435  527688 exec_runner.go:144] found /home/jenkins/minikube-integration/21656-292570/.minikube/key.pem, removing ...
	I0929 12:38:41.096448  527688 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21656-292570/.minikube/key.pem
	I0929 12:38:41.096505  527688 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-292570/.minikube/key.pem (1675 bytes)
	I0929 12:38:41.096596  527688 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem org=jenkins.calico-800992 san=[127.0.0.1 192.168.76.2 calico-800992 localhost minikube]
	I0929 12:38:41.472770  527688 provision.go:177] copyRemoteCerts
	I0929 12:38:41.472843  527688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:38:41.472888  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:38:41.489903  527688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/calico-800992/id_rsa Username:docker}
	I0929 12:38:41.589130  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 12:38:41.614080  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 12:38:41.638660  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 12:38:41.665746  527688 provision.go:87] duration metric: took 588.992763ms to configureAuth
	I0929 12:38:41.665775  527688 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:38:41.665970  527688 config.go:182] Loaded profile config "calico-800992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:38:41.666080  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:38:41.685270  527688 main.go:141] libmachine: Using SSH client type: native
	I0929 12:38:41.685767  527688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0929 12:38:41.685788  527688 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 12:38:41.991035  527688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 12:38:41.991109  527688 machine.go:96] duration metric: took 4.505241655s to provisionDockerMachine
	I0929 12:38:41.991132  527688 client.go:171] duration metric: took 11.663141278s to LocalClient.Create
	I0929 12:38:41.991219  527688 start.go:167] duration metric: took 11.663262071s to libmachine.API.Create "calico-800992"
	I0929 12:38:41.991258  527688 start.go:293] postStartSetup for "calico-800992" (driver="docker")
	I0929 12:38:41.991299  527688 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:38:41.991404  527688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:38:41.991501  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:38:42.023416  527688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/calico-800992/id_rsa Username:docker}
	I0929 12:38:42.135090  527688 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:38:42.141360  527688 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:38:42.141400  527688 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:38:42.141412  527688 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:38:42.141419  527688 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:38:42.141431  527688 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-292570/.minikube/addons for local assets ...
	I0929 12:38:42.141502  527688 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-292570/.minikube/files for local assets ...
	I0929 12:38:42.141608  527688 filesync.go:149] local asset: /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem -> 2944252.pem in /etc/ssl/certs
	I0929 12:38:42.141733  527688 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:38:42.155367  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem --> /etc/ssl/certs/2944252.pem (1708 bytes)
	I0929 12:38:42.189782  527688 start.go:296] duration metric: took 198.477618ms for postStartSetup
	I0929 12:38:42.190212  527688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-800992
	I0929 12:38:42.222871  527688 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/config.json ...
	I0929 12:38:42.223227  527688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:38:42.223297  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:38:42.249866  527688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/calico-800992/id_rsa Username:docker}
	I0929 12:38:42.350608  527688 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:38:42.356671  527688 start.go:128] duration metric: took 12.03232623s to createHost
	I0929 12:38:42.356694  527688 start.go:83] releasing machines lock for "calico-800992", held for 12.032504621s
	I0929 12:38:42.356780  527688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-800992
	I0929 12:38:42.391823  527688 ssh_runner.go:195] Run: cat /version.json
	I0929 12:38:42.391879  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:38:42.392237  527688 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:38:42.392356  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:38:42.424666  527688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/calico-800992/id_rsa Username:docker}
	I0929 12:38:42.440473  527688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/calico-800992/id_rsa Username:docker}
	I0929 12:38:42.655865  527688 ssh_runner.go:195] Run: systemctl --version
	I0929 12:38:42.660373  527688 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 12:38:42.847316  527688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:38:42.855149  527688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:38:42.887427  527688 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 12:38:42.887583  527688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:38:42.933150  527688 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 12:38:42.933211  527688 start.go:495] detecting cgroup driver to use...
	I0929 12:38:42.933269  527688 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 12:38:42.933352  527688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:38:42.959927  527688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:38:42.974101  527688 docker.go:218] disabling cri-docker service (if available) ...
	I0929 12:38:42.974221  527688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 12:38:42.994708  527688 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 12:38:43.013922  527688 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 12:38:43.145188  527688 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 12:38:43.273751  527688 docker.go:234] disabling docker service ...
	I0929 12:38:43.273882  527688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 12:38:43.308252  527688 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 12:38:43.325356  527688 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 12:38:43.442426  527688 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 12:38:43.572145  527688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:38:43.586221  527688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:38:43.604425  527688 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 12:38:43.604485  527688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:38:43.619338  527688 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 12:38:43.619409  527688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:38:43.631237  527688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:38:43.641680  527688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:38:43.652803  527688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:38:43.663420  527688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:38:43.680225  527688 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:38:43.699830  527688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:38:43.711000  527688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:38:43.721149  527688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:38:43.730874  527688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:38:43.855167  527688 ssh_runner.go:195] Run: sudo systemctl restart crio
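
The sed commands above rewrite the CRI-O drop-in (/etc/crio/crio.conf.d/02-crio.conf) to pin the pause image and switch the cgroup manager to cgroupfs before crio is restarted. A hedged Go equivalent of just those two rewrites, using the same regex-replace-then-rewrite-file approach (a sketch, not minikube's crio.go code; it assumes the drop-in file already exists and that you run it as root):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        cfg := string(data)
        // Pin the pause image used for pod sandboxes.
        cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        // Match the kubelet's cgroup driver (cgroupfs was detected on this host).
        cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(cfg, `cgroup_manager = "cgroupfs"`)
        if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
            panic(err)
        }
        // As in the log, a `systemctl restart crio` is still needed afterwards.
    }
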
	I0929 12:38:44.034481  527688 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 12:38:44.034627  527688 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 12:38:44.040701  527688 start.go:563] Will wait 60s for crictl version
	I0929 12:38:44.040824  527688 ssh_runner.go:195] Run: which crictl
	I0929 12:38:44.045699  527688 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:38:44.099957  527688 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 12:38:44.100108  527688 ssh_runner.go:195] Run: crio --version
	I0929 12:38:44.142018  527688 ssh_runner.go:195] Run: crio --version
	I0929 12:38:44.196336  527688 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 12:38:44.199261  527688 cli_runner.go:164] Run: docker network inspect calico-800992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:38:44.217005  527688 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 12:38:44.221027  527688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:38:44.232242  527688 kubeadm.go:875] updating cluster {Name:calico-800992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-800992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:38:44.232370  527688 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 12:38:44.232424  527688 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:38:44.332561  527688 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 12:38:44.332588  527688 crio.go:433] Images already preloaded, skipping extraction
	I0929 12:38:44.332642  527688 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:38:44.377407  527688 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 12:38:44.377431  527688 cache_images.go:85] Images are preloaded, skipping loading
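
The preload check above runs `crictl images --output json` and concludes that every required image is already present, so no images need to be loaded. A rough sketch of that verification; the JSON field names ("images", "repoTags") follow the CRI list output as I understand it and should be treated as assumptions:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list crictlImages
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Example tags only; the real check covers the full preload manifest.
        for _, want := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/kube-apiserver:v1.34.0"} {
            fmt.Println(want, "preloaded:", have[want])
        }
    }
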
	I0929 12:38:44.377440  527688 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 crio true true} ...
	I0929 12:38:44.377577  527688 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-800992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-800992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0929 12:38:44.377663  527688 ssh_runner.go:195] Run: crio config
	I0929 12:38:44.430755  527688 cni.go:84] Creating CNI manager for "calico"
	I0929 12:38:44.430784  527688 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:38:44.430808  527688 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-800992 NodeName:calico-800992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:38:44.430985  527688 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-800992"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:38:44.431074  527688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:38:44.440806  527688 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:38:44.440896  527688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:38:44.454955  527688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 12:38:44.480315  527688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:38:44.500026  527688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0929 12:38:44.522849  527688 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:38:44.527418  527688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
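
The bash one-liner above (and the earlier one for host.minikube.internal) updates /etc/hosts idempotently: it strips any existing line for the name, appends the current mapping, and copies the result back. A small Go sketch of the same pattern, with the IP and hostname taken from the log; it needs root and is illustrative only:

    package main

    import (
        "os"
        "strings"
    )

    // pinHost removes any stale "<ip>\t<name>" line and appends the new one.
    func pinHost(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the old mapping for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        _ = pinHost("192.168.76.2", "control-plane.minikube.internal")
    }
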
	I0929 12:38:44.541552  527688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:38:44.669282  527688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:38:44.688175  527688 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992 for IP: 192.168.76.2
	I0929 12:38:44.688193  527688 certs.go:194] generating shared ca certs ...
	I0929 12:38:44.688209  527688 certs.go:226] acquiring lock for ca certs: {Name:mkd338253a13587776ce07e6238e0355c4b0e958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:38:44.688425  527688 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-292570/.minikube/ca.key
	I0929 12:38:44.688468  527688 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.key
	I0929 12:38:44.688476  527688 certs.go:256] generating profile certs ...
	I0929 12:38:44.688529  527688 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/client.key
	I0929 12:38:44.688542  527688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/client.crt with IP's: []
	I0929 12:38:45.211853  527688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/client.crt ...
	I0929 12:38:45.211969  527688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/client.crt: {Name:mk2c1bdd665b5ac8b99d606a0d72f1e94fecc90d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:38:45.212262  527688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/client.key ...
	I0929 12:38:45.212346  527688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/client.key: {Name:mk81bcbdea6f9dc55ee6131cdb2380e0355db993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:38:45.212527  527688 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.key.20b26f58
	I0929 12:38:45.212576  527688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.crt.20b26f58 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0929 12:38:45.700562  527688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.crt.20b26f58 ...
	I0929 12:38:45.700635  527688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.crt.20b26f58: {Name:mkdf0bbe4a2fed54ee3bd8a7ed89c54586bf43a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:38:45.700839  527688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.key.20b26f58 ...
	I0929 12:38:45.700877  527688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.key.20b26f58: {Name:mk6e57689a1e9eb45ba1d3d08f97ce3ed4bcfd8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:38:45.700997  527688 certs.go:381] copying /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.crt.20b26f58 -> /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.crt
	I0929 12:38:45.701115  527688 certs.go:385] copying /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.key.20b26f58 -> /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.key
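
The apiserver serving certificate generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] (service VIP, loopback, and the node IP). As a minimal sketch of issuing a serving cert with those SANs using crypto/x509 — self-signed here for brevity, whereas minikube signs it with its existing minikubeCA key pair:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs listed in the log for the apiserver cert.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
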
	I0929 12:38:45.701225  527688 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/proxy-client.key
	I0929 12:38:45.701262  527688 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/proxy-client.crt with IP's: []
	I0929 12:38:45.922928  527688 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/proxy-client.crt ...
	I0929 12:38:45.923000  527688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/proxy-client.crt: {Name:mka99358f60118961f4ae7f7a0e353ba7c2e3cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:38:45.923965  527688 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/proxy-client.key ...
	I0929 12:38:45.924012  527688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/proxy-client.key: {Name:mkd41ad61247020ce62b6271e512039a78fddbf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:38:45.924901  527688 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/294425.pem (1338 bytes)
	W0929 12:38:45.924979  527688 certs.go:480] ignoring /home/jenkins/minikube-integration/21656-292570/.minikube/certs/294425_empty.pem, impossibly tiny 0 bytes
	I0929 12:38:45.925008  527688 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 12:38:45.925070  527688 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/ca.pem (1078 bytes)
	I0929 12:38:45.925129  527688 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:38:45.925178  527688 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/certs/key.pem (1675 bytes)
	I0929 12:38:45.925260  527688 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem (1708 bytes)
	I0929 12:38:45.925868  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:38:45.952099  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 12:38:45.983375  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:38:46.015384  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 12:38:46.049033  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 12:38:46.075554  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 12:38:46.103201  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:38:46.129457  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/calico-800992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:38:46.156124  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/ssl/certs/2944252.pem --> /usr/share/ca-certificates/2944252.pem (1708 bytes)
	I0929 12:38:46.182126  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:38:46.207497  527688 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-292570/.minikube/certs/294425.pem --> /usr/share/ca-certificates/294425.pem (1338 bytes)
	I0929 12:38:46.232991  527688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:38:46.250950  527688 ssh_runner.go:195] Run: openssl version
	I0929 12:38:46.256511  527688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2944252.pem && ln -fs /usr/share/ca-certificates/2944252.pem /etc/ssl/certs/2944252.pem"
	I0929 12:38:46.265892  527688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2944252.pem
	I0929 12:38:46.269298  527688 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:28 /usr/share/ca-certificates/2944252.pem
	I0929 12:38:46.269387  527688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2944252.pem
	I0929 12:38:46.276113  527688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2944252.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:38:46.286085  527688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:38:46.296711  527688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:38:46.313363  527688 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:38:46.313435  527688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:38:46.333169  527688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:38:46.379690  527688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294425.pem && ln -fs /usr/share/ca-certificates/294425.pem /etc/ssl/certs/294425.pem"
	I0929 12:38:46.405129  527688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294425.pem
	I0929 12:38:46.409272  527688 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:28 /usr/share/ca-certificates/294425.pem
	I0929 12:38:46.409339  527688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294425.pem
	I0929 12:38:46.417474  527688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294425.pem /etc/ssl/certs/51391683.0"
	I0929 12:38:46.427773  527688 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:38:46.434147  527688 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 12:38:46.434201  527688 kubeadm.go:392] StartCluster: {Name:calico-800992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-800992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSD
omain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Sock
etVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:38:46.434280  527688 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 12:38:46.434350  527688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 12:38:46.471334  527688 cri.go:89] found id: ""
	I0929 12:38:46.471439  527688 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:38:46.481898  527688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 12:38:46.492776  527688 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 12:38:46.492852  527688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 12:38:46.502567  527688 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 12:38:46.502592  527688 kubeadm.go:157] found existing configuration files:
	
	I0929 12:38:46.502655  527688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 12:38:46.511941  527688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 12:38:46.512058  527688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 12:38:46.521133  527688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 12:38:46.531705  527688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 12:38:46.531825  527688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 12:38:46.540900  527688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 12:38:46.549894  527688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 12:38:46.549995  527688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 12:38:46.558507  527688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 12:38:46.567770  527688 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 12:38:46.567891  527688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 12:38:46.577013  527688 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 12:38:46.626530  527688 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 12:38:46.626656  527688 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 12:38:46.647208  527688 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 12:38:46.647330  527688 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0929 12:38:46.647388  527688 kubeadm.go:310] OS: Linux
	I0929 12:38:46.647477  527688 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 12:38:46.647547  527688 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0929 12:38:46.647620  527688 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 12:38:46.647686  527688 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 12:38:46.647764  527688 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 12:38:46.647855  527688 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 12:38:46.647934  527688 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 12:38:46.648017  527688 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 12:38:46.648098  527688 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0929 12:38:46.721063  527688 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 12:38:46.721221  527688 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 12:38:46.721348  527688 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
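
The preflight hint above points at 'kubeadm config images pull' as a way to fetch the control-plane images ahead of time. Run inside the node, a sketch of that step would look like the following (the binary path mirrors the kubeadm invocation earlier in this log and is otherwise an assumption):

	# Pre-pull control-plane images for the version kubeadm is about to install.
	minikube -p calico-800992 ssh -- \
	  sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config images pull --kubernetes-version v1.34.0
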
	I0929 12:38:46.735919  527688 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 12:38:46.741698  527688 out.go:252]   - Generating certificates and keys ...
	I0929 12:38:46.741824  527688 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 12:38:46.741934  527688 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 12:38:47.152347  527688 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 12:38:47.628443  527688 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 12:38:47.919700  527688 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 12:38:48.424308  527688 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 12:38:48.895131  527688 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 12:38:48.897831  527688 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-800992 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0929 12:38:50.070225  527688 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 12:38:50.070871  527688 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-800992 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0929 12:38:50.841254  527688 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 12:38:51.198625  527688 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 12:38:51.756261  527688 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 12:38:51.756608  527688 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 12:38:51.910358  527688 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 12:38:52.298676  527688 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 12:38:52.608086  527688 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 12:38:53.221137  527688 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 12:38:53.762545  527688 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 12:38:53.763194  527688 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 12:38:53.765854  527688 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 12:38:53.769281  527688 out.go:252]   - Booting up control plane ...
	I0929 12:38:53.769390  527688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 12:38:53.769484  527688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 12:38:53.769555  527688 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 12:38:53.780804  527688 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 12:38:53.781164  527688 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 12:38:53.787461  527688 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 12:38:53.787812  527688 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 12:38:53.787862  527688 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 12:38:53.914748  527688 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 12:38:53.914875  527688 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 12:38:54.916177  527688 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00162943s
	I0929 12:38:54.920318  527688 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 12:38:54.920420  527688 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0929 12:38:54.920515  527688 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 12:38:54.920917  527688 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 12:38:57.420734  527688 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.499885514s
	I0929 12:38:59.381422  527688 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.461120132s
	I0929 12:39:00.923172  527688 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001252856s
	I0929 12:39:00.941524  527688 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 12:39:00.957248  527688 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 12:39:00.971365  527688 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 12:39:00.971587  527688 kubeadm.go:310] [mark-control-plane] Marking the node calico-800992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 12:39:00.983794  527688 kubeadm.go:310] [bootstrap-token] Using token: 6ria0v.ox2lhqs5e4j0c032
	I0929 12:39:00.986702  527688 out.go:252]   - Configuring RBAC rules ...
	I0929 12:39:00.986833  527688 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 12:39:00.991593  527688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 12:39:01.003333  527688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 12:39:01.010889  527688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 12:39:01.017896  527688 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 12:39:01.029464  527688 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 12:39:01.328946  527688 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 12:39:01.778496  527688 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 12:39:02.332222  527688 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 12:39:02.333585  527688 kubeadm.go:310] 
	I0929 12:39:02.333667  527688 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 12:39:02.333699  527688 kubeadm.go:310] 
	I0929 12:39:02.333804  527688 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 12:39:02.333811  527688 kubeadm.go:310] 
	I0929 12:39:02.333846  527688 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 12:39:02.333913  527688 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 12:39:02.334021  527688 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 12:39:02.334036  527688 kubeadm.go:310] 
	I0929 12:39:02.334113  527688 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 12:39:02.334132  527688 kubeadm.go:310] 
	I0929 12:39:02.334190  527688 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 12:39:02.334216  527688 kubeadm.go:310] 
	I0929 12:39:02.334368  527688 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 12:39:02.334473  527688 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 12:39:02.334548  527688 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 12:39:02.334553  527688 kubeadm.go:310] 
	I0929 12:39:02.334648  527688 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 12:39:02.334730  527688 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 12:39:02.334735  527688 kubeadm.go:310] 
	I0929 12:39:02.334877  527688 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6ria0v.ox2lhqs5e4j0c032 \
	I0929 12:39:02.335015  527688 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0f05eddd1015f8286cd14da3bbb5f4fa1c9488aa1ea754c6d0a74a9af6ec8883 \
	I0929 12:39:02.335044  527688 kubeadm.go:310] 	--control-plane 
	I0929 12:39:02.335070  527688 kubeadm.go:310] 
	I0929 12:39:02.335191  527688 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 12:39:02.335202  527688 kubeadm.go:310] 
	I0929 12:39:02.335318  527688 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6ria0v.ox2lhqs5e4j0c032 \
	I0929 12:39:02.335450  527688 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0f05eddd1015f8286cd14da3bbb5f4fa1c9488aa1ea754c6d0a74a9af6ec8883 
	I0929 12:39:02.340883  527688 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0929 12:39:02.341127  527688 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0929 12:39:02.341240  527688 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
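
The SystemVerification warning above notes that the node is still on cgroups v1. A generic way to confirm which hierarchy the node mounts (not something the test itself performs) is:

	# "cgroup2fs" indicates cgroups v2; "tmpfs" indicates the legacy v1 hierarchy.
	minikube -p calico-800992 ssh -- stat -fc %T /sys/fs/cgroup/
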
	I0929 12:39:02.341261  527688 cni.go:84] Creating CNI manager for "calico"
	I0929 12:39:02.344520  527688 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0929 12:39:02.348245  527688 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 12:39:02.348275  527688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0929 12:39:02.382042  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 12:39:04.110832  527688 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.728752018s)
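
With the Calico manifest applied, the node can only become Ready once the Calico pods do. A quick way to watch them (label selectors follow upstream Calico manifest conventions and are an assumption here; the context name assumes the kubeconfig written by this run):

	# Check the Calico DaemonSet and controller pods rolled out by cni.yaml.
	kubectl --context calico-800992 -n kube-system get pods -l k8s-app=calico-node
	kubectl --context calico-800992 -n kube-system get pods -l k8s-app=calico-kube-controllers
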
	I0929 12:39:04.110882  527688 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 12:39:04.111015  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:39:04.111085  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-800992 minikube.k8s.io/updated_at=2025_09_29T12_39_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8 minikube.k8s.io/name=calico-800992 minikube.k8s.io/primary=true
	I0929 12:39:04.245811  527688 ops.go:34] apiserver oom_adj: -16
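
The ops check above reports the kube-apiserver OOM score adjustment (-16). It can be re-read by hand; the inner command is copied from the ssh_runner line above:

	# Read the apiserver's oom_adj inside the node; -16 makes the kernel far less
	# likely to OOM-kill the process under memory pressure.
	minikube -p calico-800992 ssh -- 'cat /proc/$(pgrep kube-apiserver)/oom_adj'
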
	I0929 12:39:04.245913  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:39:04.746638  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:39:05.246768  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:39:05.746628  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:39:06.246085  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:39:06.746797  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:39:07.246709  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:39:07.746820  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:39:08.246483  527688 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:39:08.496534  527688 kubeadm.go:1105] duration metric: took 4.38556143s to wait for elevateKubeSystemPrivileges
	I0929 12:39:08.496562  527688 kubeadm.go:394] duration metric: took 22.062363928s to StartCluster
	I0929 12:39:08.496580  527688 settings.go:142] acquiring lock: {Name:mk8da0e06d1edc552f3cec9ed26678491ca734d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:39:08.496643  527688 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 12:39:08.497598  527688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/kubeconfig: {Name:mk84aa46812be3352ca2874bd06be6025c5058bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:39:08.497801  527688 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 12:39:08.497930  527688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 12:39:08.498187  527688 config.go:182] Loaded profile config "calico-800992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:39:08.498223  527688 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:39:08.498285  527688 addons.go:69] Setting storage-provisioner=true in profile "calico-800992"
	I0929 12:39:08.498299  527688 addons.go:238] Setting addon storage-provisioner=true in "calico-800992"
	I0929 12:39:08.498331  527688 host.go:66] Checking if "calico-800992" exists ...
	I0929 12:39:08.498994  527688 cli_runner.go:164] Run: docker container inspect calico-800992 --format={{.State.Status}}
	I0929 12:39:08.499446  527688 addons.go:69] Setting default-storageclass=true in profile "calico-800992"
	I0929 12:39:08.499471  527688 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-800992"
	I0929 12:39:08.499735  527688 cli_runner.go:164] Run: docker container inspect calico-800992 --format={{.State.Status}}
	I0929 12:39:08.500986  527688 out.go:179] * Verifying Kubernetes components...
	I0929 12:39:08.505487  527688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:39:08.562454  527688 addons.go:238] Setting addon default-storageclass=true in "calico-800992"
	I0929 12:39:08.562500  527688 host.go:66] Checking if "calico-800992" exists ...
	I0929 12:39:08.562991  527688 cli_runner.go:164] Run: docker container inspect calico-800992 --format={{.State.Status}}
	I0929 12:39:08.566262  527688 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:39:08.569169  527688 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:39:08.569188  527688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:39:08.569258  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:39:08.609398  527688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/calico-800992/id_rsa Username:docker}
	I0929 12:39:08.613681  527688 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:39:08.613701  527688 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:39:08.613778  527688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-800992
	I0929 12:39:08.648577  527688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/calico-800992/id_rsa Username:docker}
	I0929 12:39:08.808857  527688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 12:39:08.810180  527688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:39:08.845415  527688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:39:08.929007  527688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:39:09.348065  527688 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
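
The sed pipeline above injects a hosts block mapping host.minikube.internal to 192.168.76.1 into the CoreDNS ConfigMap. The injected stanza can be verified with (again assuming a kubeconfig containing the calico-800992 context):

	# Show the hosts stanza injected into the CoreDNS Corefile.
	kubectl --context calico-800992 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
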
	I0929 12:39:09.349915  527688 node_ready.go:35] waiting up to 15m0s for node "calico-800992" to be "Ready" ...
	I0929 12:39:09.548490  527688 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 12:39:09.551317  527688 addons.go:514] duration metric: took 1.05307654s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0929 12:39:09.853245  527688 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-800992" context rescaled to 1 replicas
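
From here the test polls the node's Ready condition for up to 15 minutes. The same condition can be inspected directly (a standard kubectl query, assuming the calico-800992 context); the warnings that follow show it remaining "False" for the rest of the logged window:

	# Print just the Ready condition status for the node being waited on.
	kubectl --context calico-800992 get node calico-800992 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
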
	W0929 12:39:11.352938  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:13.353411  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:15.353690  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:17.853490  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:19.853591  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:22.353205  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:24.353336  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:26.353801  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:28.354679  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:30.853562  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:33.353659  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:35.354717  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:37.852937  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:40.353369  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:42.353901  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:44.853204  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:47.353228  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:49.353373  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:51.354010  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:53.853889  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:56.353472  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:39:58.353733  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:00.389606  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:02.854594  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:05.354021  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:07.857175  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:10.353783  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:12.355563  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:14.853447  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:16.854128  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:18.854970  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:21.353155  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:23.353497  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:25.354180  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:27.354271  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:29.853349  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:31.854283  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:33.854369  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:35.855366  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:38.353846  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:40.852989  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:42.853307  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:45.360410  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:47.853650  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:49.856672  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:52.353034  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:54.353130  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:56.353717  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:40:58.353798  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:00.355164  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:02.853250  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:04.854199  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:07.353944  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:09.852883  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:11.853263  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:14.353482  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:16.853789  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:19.353565  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:21.852846  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:23.854104  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:26.352919  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:28.353225  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:30.353618  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:32.854104  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:34.857071  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:37.353343  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:39.353938  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:41.854006  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:43.854816  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:46.353379  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:48.353865  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:50.853051  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:52.855074  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:55.354081  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:57.853050  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:41:59.853305  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:01.855048  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:04.353755  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:06.853899  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:08.853939  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:11.353858  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:13.853051  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:15.853777  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:18.354357  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:20.852990  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:23.353015  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:25.353953  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:27.853875  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:30.352706  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:32.352989  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:34.353082  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:36.353609  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:38.852954  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:40.853563  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:43.352976  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:45.358317  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:47.853574  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:50.353734  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:52.853496  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:55.354052  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:57.853010  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:42:59.853339  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:01.854025  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:04.356038  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:06.854150  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:09.352779  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:11.353214  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:13.854060  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:16.353176  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:18.853397  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:21.353770  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:23.853225  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:25.853836  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:28.353492  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:30.354228  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:32.852830  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:34.853986  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:37.352904  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:39.353195  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:41.353625  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:43.353836  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:45.357297  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:47.854432  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:49.857254  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:52.353602  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:54.854049  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:57.353326  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:43:59.852921  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:02.353521  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:04.853589  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:07.353144  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:09.354309  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:11.853041  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:13.853984  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:15.856185  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:18.353632  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:20.852981  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:22.853481  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:25.353031  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:27.852931  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:29.853820  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:32.353727  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:34.853061  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:36.853693  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:39.354151  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:41.853125  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:43.854091  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:45.854824  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:48.352804  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:50.853045  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:53.352957  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:55.852698  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:44:57.857860  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:00.365774  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:02.853272  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:05.353335  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:07.852831  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:09.853195  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:11.853949  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:14.353940  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:16.853684  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:19.354098  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:21.852831  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:24.353455  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:26.852971  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:28.853913  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:31.354452  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:33.356404  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:35.853298  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:37.853627  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:40.352865  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:42.353762  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:44.853508  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:46.853650  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:48.854042  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:51.353423  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:53.853641  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:55.853675  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:45:58.353226  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:00.359232  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:02.853272  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:05.353570  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:07.353788  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:09.853240  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:12.353034  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:14.353778  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:16.853724  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:18.854184  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:21.352972  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:23.355475  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:25.853647  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:27.854467  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:30.353707  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:32.852861  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:34.852914  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:36.853572  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:39.354204  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:41.357123  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:43.853180  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:46.352919  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:48.353355  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:50.853037  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:52.853628  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:55.353341  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:46:57.852941  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:00.359327  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:02.854089  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:05.353394  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:07.353505  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:09.353919  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:11.852835  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:13.853314  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:16.352861  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:18.352984  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:20.852902  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:22.853349  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:25.353253  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:27.852908  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:30.352773  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:32.353211  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:34.852950  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:36.853886  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:39.353153  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:41.853160  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:43.853613  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:46.353459  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:48.353639  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:50.852859  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:52.853603  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:55.354337  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:57.852954  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:47:59.853255  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:02.352989  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:04.353599  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:06.853356  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:09.352920  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:11.353025  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:13.353468  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:15.853090  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:18.353087  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:20.353358  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:22.353578  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:24.853348  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:27.353575  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:29.854157  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:32.353120  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:34.852755  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:36.853310  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:39.353071  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:41.353502  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:43.853800  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:46.352431  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:48.353823  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:50.852929  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:53.353677  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:55.853649  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:48:58.352769  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:00.355127  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:02.852714  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:04.853553  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:07.353404  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:09.852795  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:11.853956  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:14.352905  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:16.352982  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:18.853241  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:21.353302  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:23.852965  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:26.352817  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:28.352854  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:30.852888  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:32.853434  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:35.352935  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:37.353357  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:39.354018  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:41.852940  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:44.352814  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:46.353706  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:48.853631  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:51.353143  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:53.852783  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:55.852979  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:49:58.353031  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:00.354599  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:02.853628  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:05.353897  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:07.853405  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:10.353699  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:12.853246  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:14.853611  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:17.352922  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:19.353234  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:21.853240  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:23.854006  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:26.353326  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:28.853101  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:31.353226  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:33.852828  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:35.852990  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:37.853036  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:40.353055  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:42.353811  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:44.852943  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:47.352929  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:49.353578  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:51.852972  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:53.853179  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:56.352775  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:50:58.352874  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:00.355707  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:02.852949  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:04.853701  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:07.353526  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:09.852615  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:11.852904  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:14.353037  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:16.353571  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:18.353701  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:20.853317  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:23.352842  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:25.353663  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:27.852969  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:30.352956  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:32.353036  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:34.353629  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:36.853316  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:39.353596  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:41.852975  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:43.853104  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:46.352799  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:48.353975  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:50.852980  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:52.853703  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:55.352829  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:57.353180  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:51:59.852980  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:01.853203  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:04.352864  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:06.853089  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:09.353519  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:11.353797  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:13.853360  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:16.353287  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:18.853108  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:21.352808  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:23.353005  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:25.353076  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:27.852833  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:29.852891  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:31.853488  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:34.352817  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:36.353213  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:38.353593  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:40.852995  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:42.854191  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:45.355275  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:47.852889  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:49.853724  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:52.353068  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:54.353112  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:56.852803  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:52:59.353002  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:01.353422  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:03.353677  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:05.853834  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:08.352828  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:10.352876  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:12.352958  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:14.852756  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:16.853761  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:19.353614  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:21.353683  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:23.852971  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:25.853022  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:28.353796  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:30.853205  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:32.853331  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:35.353660  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:37.853458  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:40.353517  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:42.852882  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:44.853060  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:46.853146  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:49.353463  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:51.854813  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:54.353187  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:56.853400  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:53:59.352820  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:54:01.353483  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:54:03.853751  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:54:06.353000  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	W0929 12:54:08.853079  527688 node_ready.go:57] node "calico-800992" has "Ready":"False" status (will retry)
	I0929 12:54:09.350821  527688 node_ready.go:38] duration metric: took 15m0.000682152s for node "calico-800992" to be "Ready" ...
	I0929 12:54:09.353992  527688 out.go:203] 
	W0929 12:54:09.357001  527688 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0929 12:54:09.357023  527688 out.go:285] * 
	W0929 12:54:09.359155  527688 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0929 12:54:09.363209  527688 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (939.51s)
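The wait that times out above is minikube's node_ready.go loop: it polls the node object until the Ready condition turns True or the 15m0s context expires, after which start exits with GUEST_START (exit status 80). For anyone reproducing this locally, an equivalent readiness poll can be sketched with client-go; this is a minimal illustration that assumes the profile's kubeconfig is at the default location, not minikube's actual implementation.

// nodeready_check.go: poll a node's Ready condition the way the
// node_ready.go retries above do (hypothetical standalone sketch).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the kubeconfig written by `minikube start -p calico-800992`.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "calico-800992", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			log.Printf("node %q has \"Ready\":\"False\" status (will retry)", node.Name)
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node to be ready")
		case <-time.After(2 * time.Second):
		}
	}
}

In this run the condition never flipped to True, so the surrounding 15-minute context expired and the calico Start test failed.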

                                                
                                    

Test pass (282/325)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.44
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.0/json-events 5.16
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.25
18 TestDownloadOnly/v1.34.0/DeleteAll 0.38
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 176.05
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.92
35 TestAddons/parallel/Registry 18.25
36 TestAddons/parallel/RegistryCreds 0.75
38 TestAddons/parallel/InspektorGadget 6.29
39 TestAddons/parallel/MetricsServer 5.89
41 TestAddons/parallel/CSI 55.72
42 TestAddons/parallel/Headlamp 18.83
43 TestAddons/parallel/CloudSpanner 6.8
44 TestAddons/parallel/LocalPath 54.28
45 TestAddons/parallel/NvidiaDevicePlugin 6.92
46 TestAddons/parallel/Yakd 11.82
48 TestAddons/StoppedEnableDisable 12.2
49 TestCertOptions 38.09
50 TestCertExpiration 242.05
52 TestForceSystemdFlag 41.48
53 TestForceSystemdEnv 41.87
59 TestErrorSpam/setup 31.57
60 TestErrorSpam/start 0.76
61 TestErrorSpam/status 1.04
62 TestErrorSpam/pause 1.64
63 TestErrorSpam/unpause 1.8
64 TestErrorSpam/stop 1.42
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 83.66
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 23.67
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.9
76 TestFunctional/serial/CacheCmd/cache/add_local 1.41
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.1
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.18
84 TestFunctional/serial/ExtraConfig 34.32
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.75
87 TestFunctional/serial/LogsFileCmd 1.71
88 TestFunctional/serial/InvalidService 4.31
90 TestFunctional/parallel/ConfigCmd 0.52
92 TestFunctional/parallel/DryRun 0.44
93 TestFunctional/parallel/InternationalLanguage 0.2
94 TestFunctional/parallel/StatusCmd 1.04
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.66
103 TestFunctional/parallel/CpCmd 2.27
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.67
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
114 TestFunctional/parallel/License 0.28
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.1
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
129 TestFunctional/parallel/MountCmd/any-port 8.73
130 TestFunctional/parallel/MountCmd/specific-port 2.13
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
132 TestFunctional/parallel/ServiceCmd/List 1.3
133 TestFunctional/parallel/ServiceCmd/JSONOutput 1.32
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.15
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.75
144 TestFunctional/parallel/ImageCommands/Setup 0.63
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 206.87
163 TestMultiControlPlane/serial/DeployApp 9.65
164 TestMultiControlPlane/serial/PingHostFromPods 1.57
165 TestMultiControlPlane/serial/AddWorkerNode 59.08
166 TestMultiControlPlane/serial/NodeLabels 0.15
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 19.45
169 TestMultiControlPlane/serial/StopSecondaryNode 12.67
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
171 TestMultiControlPlane/serial/RestartSecondaryNode 34.4
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.42
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 132.99
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.8
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 35.53
177 TestMultiControlPlane/serial/RestartCluster 89.94
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
179 TestMultiControlPlane/serial/AddSecondaryNode 78.86
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
184 TestJSONOutput/start/Command 82.97
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.72
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.61
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.82
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.24
209 TestKicCustomNetwork/create_custom_network 38.92
210 TestKicCustomNetwork/use_default_bridge_network 32.32
211 TestKicExistingNetwork 32.6
212 TestKicCustomSubnet 35.68
213 TestKicStaticIP 35.99
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 66.32
218 TestMountStart/serial/StartWithMountFirst 7.78
219 TestMountStart/serial/VerifyMountFirst 0.28
220 TestMountStart/serial/StartWithMountSecond 6.17
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.64
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.19
225 TestMountStart/serial/RestartStopped 8.29
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 138.08
230 TestMultiNode/serial/DeployApp2Nodes 6.57
231 TestMultiNode/serial/PingHostFrom2Pods 1
232 TestMultiNode/serial/AddNode 55.14
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.7
235 TestMultiNode/serial/CopyFile 10.09
236 TestMultiNode/serial/StopNode 2.28
237 TestMultiNode/serial/StartAfterStop 7.75
238 TestMultiNode/serial/RestartKeepsNodes 78.93
239 TestMultiNode/serial/DeleteNode 5.55
240 TestMultiNode/serial/StopMultiNode 23.8
241 TestMultiNode/serial/RestartMultiNode 57.59
242 TestMultiNode/serial/ValidateNameConflict 30.93
247 TestPreload 134.89
249 TestScheduledStopUnix 108.42
252 TestInsufficientStorage 13.12
253 TestRunningBinaryUpgrade 54.05
255 TestKubernetesUpgrade 344.01
256 TestMissingContainerUpgrade 111.27
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
259 TestNoKubernetes/serial/StartWithK8s 50.8
260 TestNoKubernetes/serial/StartWithStopK8s 116.71
261 TestNoKubernetes/serial/Start 8.33
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
263 TestNoKubernetes/serial/ProfileList 34.29
264 TestNoKubernetes/serial/Stop 1.21
265 TestNoKubernetes/serial/StartNoArgs 6.87
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
267 TestStoppedBinaryUpgrade/Setup 0.83
268 TestStoppedBinaryUpgrade/Upgrade 55.72
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.57
278 TestPause/serial/Start 82.1
279 TestPause/serial/SecondStartNoReconfiguration 29.18
280 TestPause/serial/Pause 0.75
281 TestPause/serial/VerifyStatus 0.36
282 TestPause/serial/Unpause 0.65
283 TestPause/serial/PauseAgain 0.84
284 TestPause/serial/DeletePaused 3.05
285 TestPause/serial/VerifyDeletedResources 1.33
293 TestNetworkPlugins/group/false 5.05
298 TestStartStop/group/old-k8s-version/serial/FirstStart 89.9
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
301 TestStartStop/group/old-k8s-version/serial/Stop 11.89
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
303 TestStartStop/group/old-k8s-version/serial/SecondStart 56.14
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
307 TestStartStop/group/old-k8s-version/serial/Pause 4.08
309 TestStartStop/group/no-preload/serial/FirstStart 77.85
311 TestStartStop/group/embed-certs/serial/FirstStart 61.94
312 TestStartStop/group/embed-certs/serial/DeployApp 10.42
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
314 TestStartStop/group/embed-certs/serial/Stop 12.14
315 TestStartStop/group/no-preload/serial/DeployApp 10.48
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.52
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
318 TestStartStop/group/embed-certs/serial/SecondStart 52.46
319 TestStartStop/group/no-preload/serial/Stop 11.97
320 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.33
321 TestStartStop/group/no-preload/serial/SecondStart 54.78
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.31
324 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
325 TestStartStop/group/embed-certs/serial/Pause 3.43
326 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 90.55
329 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
330 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
331 TestStartStop/group/no-preload/serial/Pause 3.72
333 TestStartStop/group/newest-cni/serial/FirstStart 41.69
334 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
336 TestStartStop/group/newest-cni/serial/Stop 1.24
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
338 TestStartStop/group/newest-cni/serial/SecondStart 15.4
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
342 TestStartStop/group/newest-cni/serial/Pause 3.03
343 TestNetworkPlugins/group/auto/Start 83.14
344 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.47
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.5
346 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.11
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
348 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.89
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
350 TestNetworkPlugins/group/auto/KubeletFlags 0.28
351 TestNetworkPlugins/group/auto/NetCatPod 11.29
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.39
355 TestNetworkPlugins/group/auto/DNS 0.23
356 TestNetworkPlugins/group/auto/Localhost 0.21
357 TestNetworkPlugins/group/auto/HairPin 0.19
358 TestNetworkPlugins/group/kindnet/Start 84.12
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
362 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
363 TestNetworkPlugins/group/kindnet/DNS 0.18
364 TestNetworkPlugins/group/kindnet/Localhost 0.22
365 TestNetworkPlugins/group/kindnet/HairPin 0.15
366 TestNetworkPlugins/group/custom-flannel/Start 61.43
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
369 TestNetworkPlugins/group/custom-flannel/DNS 0.19
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
372 TestNetworkPlugins/group/enable-default-cni/Start 77.48
373 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
374 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
375 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
376 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
377 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
378 TestNetworkPlugins/group/flannel/Start 59.66
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
381 TestNetworkPlugins/group/flannel/NetCatPod 10.32
382 TestNetworkPlugins/group/flannel/DNS 0.18
383 TestNetworkPlugins/group/flannel/Localhost 0.15
384 TestNetworkPlugins/group/flannel/HairPin 0.18
385 TestNetworkPlugins/group/bridge/Start 46.57
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
387 TestNetworkPlugins/group/bridge/NetCatPod 11.27
388 TestNetworkPlugins/group/bridge/DNS 0.18
389 TestNetworkPlugins/group/bridge/Localhost 0.16
390 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.28.0/json-events (5.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-069242 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-069242 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.436682044s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.44s)
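This subtest drives `minikube start -o=json`, which writes one JSON event object per stdout line. A minimal consumer of that stream can be sketched as follows; it is a hypothetical standalone program, and only the presence of a top-level "type" field per event is assumed here.

// jsonevents_reader.go: decode the line-delimited JSON emitted by
// `minikube start -o=json --download-only ...` (hypothetical sketch).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
		"--download-only", "-p", "download-only-069242", "--force",
		"--kubernetes-version=v1.28.0", "--container-runtime=crio", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some events are long
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			log.Printf("non-JSON line: %s", sc.Text())
			continue
		}
		// The payload layout is left opaque; only the event type is printed.
		fmt.Printf("event type=%v\n", ev["type"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}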

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 11:20:20.679156  294425 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0929 11:20:20.679236  294425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
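preload-exists only asserts that the tarball named in the log lines above is already on disk, so the check reduces to building the expected cache path and stat-ing it. A sketch follows; the path pattern is read off the log above, and the helper name is made up for illustration.

// preload_exists_check.go: rebuild the cache path that the preload-exists
// subtest expects and verify the tarball is present (hypothetical sketch).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath is a made-up helper; "cri-o" is hard-coded because this job
// only exercises the crio runtime.
func preloadPath(minikubeHome, k8sVersion, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-%s.tar.lz4", k8sVersion, arch)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // e.g. .../21656-292570/.minikube in this run
	p := preloadPath(home, "v1.28.0", "arm64")
	if _, err := os.Stat(p); err != nil {
		fmt.Fprintln(os.Stderr, "preload missing:", p)
		os.Exit(1)
	}
	fmt.Println("found local preload:", p)
}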

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-069242
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-069242: exit status 85 (70.087333ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-069242 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-069242 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:20:15
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:20:15.289419  294430 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:20:15.289575  294430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:15.289596  294430 out.go:374] Setting ErrFile to fd 2...
	I0929 11:20:15.289602  294430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:15.289864  294430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	W0929 11:20:15.290005  294430 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21656-292570/.minikube/config/config.json: open /home/jenkins/minikube-integration/21656-292570/.minikube/config/config.json: no such file or directory
	I0929 11:20:15.290398  294430 out.go:368] Setting JSON to true
	I0929 11:20:15.291220  294430 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3766,"bootTime":1759141049,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0929 11:20:15.291284  294430 start.go:140] virtualization:  
	I0929 11:20:15.295340  294430 out.go:99] [download-only-069242] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0929 11:20:15.295487  294430 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 11:20:15.295550  294430 notify.go:220] Checking for updates...
	I0929 11:20:15.298304  294430 out.go:171] MINIKUBE_LOCATION=21656
	I0929 11:20:15.301190  294430 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:20:15.304007  294430 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:20:15.306857  294430 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	I0929 11:20:15.309807  294430 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0929 11:20:15.315503  294430 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 11:20:15.315785  294430 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:20:15.337617  294430 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 11:20:15.337726  294430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:15.398707  294430 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-29 11:20:15.38993515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:20:15.398819  294430 docker.go:318] overlay module found
	I0929 11:20:15.401697  294430 out.go:99] Using the docker driver based on user configuration
	I0929 11:20:15.401742  294430 start.go:304] selected driver: docker
	I0929 11:20:15.401754  294430 start.go:924] validating driver "docker" against <nil>
	I0929 11:20:15.401895  294430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:15.460324  294430 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-29 11:20:15.451168888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:20:15.460528  294430 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:20:15.460834  294430 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0929 11:20:15.460995  294430 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 11:20:15.464053  294430 out.go:171] Using Docker driver with root privileges
	I0929 11:20:15.467070  294430 cni.go:84] Creating CNI manager for ""
	I0929 11:20:15.467149  294430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:20:15.467164  294430 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 11:20:15.467253  294430 start.go:348] cluster config:
	{Name:download-only-069242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-069242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:20:15.470456  294430 out.go:99] Starting "download-only-069242" primary control-plane node in "download-only-069242" cluster
	I0929 11:20:15.470491  294430 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 11:20:15.473413  294430 out.go:99] Pulling base image v0.0.48 ...
	I0929 11:20:15.473447  294430 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 11:20:15.473619  294430 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 11:20:15.489914  294430 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 11:20:15.490077  294430 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 11:20:15.490171  294430 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 11:20:15.541048  294430 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0929 11:20:15.541069  294430 cache.go:58] Caching tarball of preloaded images
	I0929 11:20:15.541219  294430 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 11:20:15.544551  294430 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 11:20:15.544586  294430 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 11:20:15.635624  294430 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0929 11:20:18.671271  294430 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 11:20:18.671435  294430 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 11:20:19.608752  294430 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0929 11:20:19.609205  294430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/download-only-069242/config.json ...
	I0929 11:20:19.609267  294430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/download-only-069242/config.json: {Name:mk58bb121ef08d7fdaee616109c767ddeb120fb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:20:19.610086  294430 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 11:20:19.610899  294430 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21656-292570/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-069242 host does not exist
	  To start a cluster, run: "minikube start -p download-only-069242"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
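The trace above also captures the preload download itself: the URL carries a `?checksum=md5:` parameter, and the saved tarball is then verified against that hash. The verification step amounts to hashing the file and comparing, roughly as below; this is a standalone sketch, not minikube's download code, and the expected hash is copied from the URL in the log.

// verify_preload_md5.go: verify a downloaded preload tarball against the
// md5 hash carried in the download URL's checksum parameter (sketch).
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	const expected = "e092595ade89dbfc477bd4cd6b9c633b" // from ?checksum=md5:... above
	if len(os.Args) < 2 {
		log.Fatal("usage: verify_preload_md5 <tarball>")
	}
	path := os.Args[1] // e.g. the preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 cached above

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		log.Fatalf("checksum mismatch: got %s, want %s", got, expected)
	}
	fmt.Println("preload checksum OK")
}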

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-069242
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (5.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-254595 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-254595 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.159079616s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.16s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 11:20:26.260664  294425 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0929 11:20:26.260713  294425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-254595
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-254595: exit status 85 (253.223ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-069242 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-069242 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ delete  │ -p download-only-069242                                                                                                                                                   │ download-only-069242 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ start   │ -o=json --download-only -p download-only-254595 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-254595 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:20:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:20:21.146233  294634 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:20:21.146425  294634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:21.146457  294634 out.go:374] Setting ErrFile to fd 2...
	I0929 11:20:21.146479  294634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:21.146739  294634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 11:20:21.147195  294634 out.go:368] Setting JSON to true
	I0929 11:20:21.148047  294634 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3772,"bootTime":1759141049,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0929 11:20:21.148148  294634 start.go:140] virtualization:  
	I0929 11:20:21.149617  294634 out.go:99] [download-only-254595] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 11:20:21.149905  294634 notify.go:220] Checking for updates...
	I0929 11:20:21.151619  294634 out.go:171] MINIKUBE_LOCATION=21656
	I0929 11:20:21.153407  294634 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:20:21.154676  294634 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:20:21.155699  294634 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	I0929 11:20:21.156945  294634 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0929 11:20:21.159312  294634 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 11:20:21.159612  294634 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:20:21.185800  294634 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 11:20:21.185919  294634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:21.246704  294634 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-29 11:20:21.23726641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:20:21.246828  294634 docker.go:318] overlay module found
	I0929 11:20:21.248029  294634 out.go:99] Using the docker driver based on user configuration
	I0929 11:20:21.248067  294634 start.go:304] selected driver: docker
	I0929 11:20:21.248084  294634 start.go:924] validating driver "docker" against <nil>
	I0929 11:20:21.248194  294634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:21.301586  294634 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-29 11:20:21.292756464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:20:21.301802  294634 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:20:21.302065  294634 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0929 11:20:21.302231  294634 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 11:20:21.303623  294634 out.go:171] Using Docker driver with root privileges
	I0929 11:20:21.304769  294634 cni.go:84] Creating CNI manager for ""
	I0929 11:20:21.304840  294634 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:20:21.304856  294634 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 11:20:21.304950  294634 start.go:348] cluster config:
	{Name:download-only-254595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-254595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:20:21.306257  294634 out.go:99] Starting "download-only-254595" primary control-plane node in "download-only-254595" cluster
	I0929 11:20:21.306278  294634 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 11:20:21.307327  294634 out.go:99] Pulling base image v0.0.48 ...
	I0929 11:20:21.307349  294634 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:20:21.307517  294634 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 11:20:21.323271  294634 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 11:20:21.323410  294634 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 11:20:21.323433  294634 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 11:20:21.323442  294634 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 11:20:21.323449  294634 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 11:20:21.358243  294634 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 11:20:21.358273  294634 cache.go:58] Caching tarball of preloaded images
	I0929 11:20:21.359092  294634 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:20:21.360474  294634 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0929 11:20:21.360504  294634 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 11:20:21.448417  294634 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:36555bb244eebf6e383c5e8810b48b3a -> /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0929 11:20:24.580661  294634 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 11:20:24.580768  294634 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21656-292570/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0929 11:20:25.566791  294634 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:20:25.567149  294634 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/download-only-254595/config.json ...
	I0929 11:20:25.567185  294634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/download-only-254595/config.json: {Name:mkcdd8ae1be976075092ae8c2154b6f3be721359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:20:25.567991  294634 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:20:25.568177  294634 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21656-292570/.minikube/cache/linux/arm64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-254595 host does not exist
	  To start a cluster, run: "minikube start -p download-only-254595"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.25s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.38s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-254595
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 11:20:28.256180  294425 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-752167 --alsologtostderr --binary-mirror http://127.0.0.1:35747 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-752167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-752167
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-571100
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-571100: exit status 85 (76.599286ms)

                                                
                                                
-- stdout --
	* Profile "addons-571100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-571100"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-571100
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-571100: exit status 85 (78.652941ms)

                                                
                                                
-- stdout --
	* Profile "addons-571100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-571100"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (176.05s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-571100 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-571100 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m56.051259465s)
--- PASS: TestAddons/Setup (176.05s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-571100 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-571100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.92s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-571100 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-571100 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [921ffba4-a3ab-4bfd-bb59-475482940d9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [921ffba4-a3ab-4bfd-bb59-475482940d9c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004630292s
addons_test.go:694: (dbg) Run:  kubectl --context addons-571100 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-571100 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-571100 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-571100 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.92s)

                                                
                                    
TestAddons/parallel/Registry (18.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.055995ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-gd6vp" [db513fbd-2518-40a7-b7ea-0ac4f5c3ffb4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003027673s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-587sf" [194fd12a-17d7-4a48-a7f9-68d05e2cacf7] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003246648s
addons_test.go:392: (dbg) Run:  kubectl --context addons-571100 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-571100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-571100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.10713575s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 ip
2025/09/29 11:24:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.25s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.75s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.236437ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-571100
addons_test.go:332: (dbg) Run:  kubectl --context addons-571100 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.75s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qlp49" [d56f8142-7de6-4c8c-be2e-7e4733337bea] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00326643s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.89s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 49.783348ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-jxwdz" [ad1132fd-f81a-44bf-876f-2792665cb535] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003547494s
addons_test.go:463: (dbg) Run:  kubectl --context addons-571100 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.89s)

                                                
                                    
TestAddons/parallel/CSI (55.72s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 11:24:29.305679  294425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 11:24:29.309804  294425 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 11:24:29.309830  294425 kapi.go:107] duration metric: took 4.163179ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.171852ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-571100 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-571100 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [50a24da7-af85-4080-b036-a579dbccba29] Pending
helpers_test.go:352: "task-pv-pod" [50a24da7-af85-4080-b036-a579dbccba29] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [50a24da7-af85-4080-b036-a579dbccba29] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003816798s
addons_test.go:572: (dbg) Run:  kubectl --context addons-571100 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-571100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-571100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-571100 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-571100 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-571100 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-571100 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [f41528c3-9dfe-49b0-b522-97d9556b1b54] Pending
helpers_test.go:352: "task-pv-pod-restore" [f41528c3-9dfe-49b0-b522-97d9556b1b54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [f41528c3-9dfe-49b0-b522-97d9556b1b54] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003273397s
addons_test.go:614: (dbg) Run:  kubectl --context addons-571100 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-571100 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-571100 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-571100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.911782476s)
--- PASS: TestAddons/parallel/CSI (55.72s)

                                                
                                    
TestAddons/parallel/Headlamp (18.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-571100 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-571100 --alsologtostderr -v=1: (1.018182089s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-k6k88" [6b7afba2-8ae4-4d20-b971-ad05d2ab1d3c] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-k6k88" [6b7afba2-8ae4-4d20-b971-ad05d2ab1d3c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-k6k88" [6b7afba2-8ae4-4d20-b971-ad05d2ab1d3c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003058246s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-571100 addons disable headlamp --alsologtostderr -v=1: (5.810505684s)
--- PASS: TestAddons/parallel/Headlamp (18.83s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-bggxw" [70c1265e-b201-4de0-8039-adfaa90ea853] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003070495s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.80s)

                                                
                                    
TestAddons/parallel/LocalPath (54.28s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-571100 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-571100 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-571100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [baa82fe2-ec03-4d68-bfb2-bff9b60f9d2c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [baa82fe2-ec03-4d68-bfb2-bff9b60f9d2c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [baa82fe2-ec03-4d68-bfb2-bff9b60f9d2c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00469397s
addons_test.go:967: (dbg) Run:  kubectl --context addons-571100 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 ssh "cat /opt/local-path-provisioner/pvc-a0cc8561-b2cc-4ce3-a55e-46c1fefc7753_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-571100 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-571100 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-571100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.656258454s)
--- PASS: TestAddons/parallel/LocalPath (54.28s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.92s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-gxxwj" [85b56a75-af0c-410c-871f-17e79640d55c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004403841s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.92s)

                                                
                                    
TestAddons/parallel/Yakd (11.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-wd7hk" [a909b83b-c42d-493a-9c2f-67204fe6773e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003065457s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-571100 addons disable yakd --alsologtostderr -v=1: (5.814599219s)
--- PASS: TestAddons/parallel/Yakd (11.82s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-571100
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-571100: (11.91595235s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-571100
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-571100
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-571100
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

                                                
                                    
TestCertOptions (38.09s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-855488 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-855488 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.378402309s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-855488 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-855488 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-855488 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-855488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-855488
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-855488: (1.961701669s)
--- PASS: TestCertOptions (38.09s)

                                                
                                    
TestCertExpiration (242.05s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-303394 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0929 12:28:25.872427  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-303394 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.223080944s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-303394 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-303394 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.053184443s)
helpers_test.go:175: Cleaning up "cert-expiration-303394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-303394
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-303394: (2.770354315s)
--- PASS: TestCertExpiration (242.05s)

                                                
                                    
TestForceSystemdFlag (41.48s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-369490 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-369490 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.529104518s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-369490 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-369490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-369490
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-369490: (2.588352728s)
--- PASS: TestForceSystemdFlag (41.48s)

                                                
                                    
TestForceSystemdEnv (41.87s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-767477 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-767477 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.261675257s)
helpers_test.go:175: Cleaning up "force-systemd-env-767477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-767477
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-767477: (2.606158992s)
--- PASS: TestForceSystemdEnv (41.87s)

                                                
                                    
TestErrorSpam/setup (31.57s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-315316 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-315316 --driver=docker  --container-runtime=crio
E0929 11:28:25.881074  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:28:25.887408  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:28:25.898716  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:28:25.920034  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:28:25.961345  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:28:26.042695  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:28:26.204143  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:28:26.525444  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:28:27.167377  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:28:28.448680  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:28:31.010640  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-315316 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-315316 --driver=docker  --container-runtime=crio: (31.565141578s)
--- PASS: TestErrorSpam/setup (31.57s)

                                                
                                    
TestErrorSpam/start (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
TestErrorSpam/status (1.04s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 status
--- PASS: TestErrorSpam/status (1.04s)

                                                
                                    
TestErrorSpam/pause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 unpause
E0929 11:28:36.132272  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (1.42s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 stop: (1.225168541s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315316 --log_dir /tmp/nospam-315316 stop
--- PASS: TestErrorSpam/stop (1.42s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21656-292570/.minikube/files/etc/test/nested/copy/294425/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (83.66s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-686485 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0929 11:28:46.374083  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:06.855479  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:47.817312  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-686485 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m23.654288141s)
--- PASS: TestFunctional/serial/StartWithProxy (83.66s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (23.67s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0929 11:30:08.119764  294425 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-686485 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-686485 --alsologtostderr -v=8: (23.670252188s)
functional_test.go:678: soft start took 23.67237218s for "functional-686485" cluster.
I0929 11:30:31.790337  294425 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (23.67s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-686485 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 cache add registry.k8s.io/pause:3.1: (1.319599674s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 cache add registry.k8s.io/pause:3.3: (1.333265335s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 cache add registry.k8s.io/pause:latest: (1.250422693s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.90s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-686485 /tmp/TestFunctionalserialCacheCmdcacheadd_local3231704932/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 cache add minikube-local-cache-test:functional-686485
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 cache delete minikube-local-cache-test:functional-686485
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-686485
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.10s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (307.241624ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 cache reload: (1.105301981s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 kubectl -- --context functional-686485 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-686485 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.18s)

TestFunctional/serial/ExtraConfig (34.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-686485 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0929 11:31:09.741152  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-686485 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.324150375s)
functional_test.go:776: restart took 34.324252784s for "functional-686485" cluster.
I0929 11:31:14.500691  294425 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (34.32s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-686485 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.75s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 logs: (1.745101541s)
--- PASS: TestFunctional/serial/LogsCmd (1.75s)

TestFunctional/serial/LogsFileCmd (1.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 logs --file /tmp/TestFunctionalserialLogsFileCmd3683012320/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 logs --file /tmp/TestFunctionalserialLogsFileCmd3683012320/001/logs.txt: (1.712565265s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

TestFunctional/serial/InvalidService (4.31s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-686485 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-686485
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-686485: exit status 115 (634.118172ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30900 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-686485 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 config get cpus: exit status 14 (90.576131ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 config get cpus: exit status 14 (93.512603ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-686485 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-686485 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (195.273919ms)

                                                
                                                
-- stdout --
	* [functional-686485] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:45:53.604878  324289 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:45:53.605115  324289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:45:53.605148  324289 out.go:374] Setting ErrFile to fd 2...
	I0929 11:45:53.605170  324289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:45:53.605466  324289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 11:45:53.605867  324289 out.go:368] Setting JSON to false
	I0929 11:45:53.606771  324289 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5305,"bootTime":1759141049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0929 11:45:53.606872  324289 start.go:140] virtualization:  
	I0929 11:45:53.609933  324289 out.go:179] * [functional-686485] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 11:45:53.613501  324289 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:45:53.613580  324289 notify.go:220] Checking for updates...
	I0929 11:45:53.619270  324289 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:45:53.622084  324289 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:45:53.624985  324289 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	I0929 11:45:53.627816  324289 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 11:45:53.630592  324289 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:45:53.633805  324289 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:45:53.634451  324289 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:45:53.665636  324289 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 11:45:53.665771  324289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:45:53.721530  324289 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 11:45:53.712273655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:45:53.721641  324289 docker.go:318] overlay module found
	I0929 11:45:53.724918  324289 out.go:179] * Using the docker driver based on existing profile
	I0929 11:45:53.727700  324289 start.go:304] selected driver: docker
	I0929 11:45:53.727718  324289 start.go:924] validating driver "docker" against &{Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:45:53.727812  324289 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:45:53.731292  324289 out.go:203] 
	W0929 11:45:53.734046  324289 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 11:45:53.736940  324289 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-686485 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-686485 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-686485 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (196.652191ms)

                                                
                                                
-- stdout --
	* [functional-686485] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:45:53.413639  324242 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:45:53.413747  324242 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:45:53.413761  324242 out.go:374] Setting ErrFile to fd 2...
	I0929 11:45:53.413766  324242 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:45:53.414125  324242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 11:45:53.414483  324242 out.go:368] Setting JSON to false
	I0929 11:45:53.415306  324242 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5304,"bootTime":1759141049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0929 11:45:53.415375  324242 start.go:140] virtualization:  
	I0929 11:45:53.419206  324242 out.go:179] * [functional-686485] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0929 11:45:53.423028  324242 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:45:53.423085  324242 notify.go:220] Checking for updates...
	I0929 11:45:53.428723  324242 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:45:53.431464  324242 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 11:45:53.434286  324242 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	I0929 11:45:53.437089  324242 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 11:45:53.439909  324242 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:45:53.443261  324242 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:45:53.443868  324242 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:45:53.470051  324242 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 11:45:53.470216  324242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:45:53.524755  324242 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 11:45:53.515532943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:45:53.524872  324242 docker.go:318] overlay module found
	I0929 11:45:53.529841  324242 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 11:45:53.532797  324242 start.go:304] selected driver: docker
	I0929 11:45:53.532845  324242 start.go:924] validating driver "docker" against &{Name:functional-686485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-686485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:45:53.532968  324242 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:45:53.536508  324242 out.go:203] 
	W0929 11:45:53.539248  324242 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 11:45:53.542091  324242 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (2.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh -n functional-686485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 cp functional-686485:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd798651978/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh -n functional-686485 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh -n functional-686485 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.27s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/294425/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo cat /etc/test/nested/copy/294425/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.67s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/294425.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo cat /etc/ssl/certs/294425.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/294425.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo cat /usr/share/ca-certificates/294425.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2944252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo cat /etc/ssl/certs/2944252.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2944252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo cat /usr/share/ca-certificates/2944252.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-686485 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 ssh "sudo systemctl is-active docker": exit status 1 (271.095569ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 ssh "sudo systemctl is-active containerd": exit status 1 (269.149686ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-686485 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-686485 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-686485 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-686485 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 320249: os: process already finished
helpers_test.go:525: unable to kill pid 320076: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-686485 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-686485 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "364.090538ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "59.297246ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "346.771239ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.390458ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (8.73s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdany-port1635415389/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759146339489315865" to /tmp/TestFunctionalparallelMountCmdany-port1635415389/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759146339489315865" to /tmp/TestFunctionalparallelMountCmdany-port1635415389/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759146339489315865" to /tmp/TestFunctionalparallelMountCmdany-port1635415389/001/test-1759146339489315865
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (330.970531ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:45:39.820749  294425 retry.go:31] will retry after 402.328643ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 11:45 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 11:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 11:45 test-1759146339489315865
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh cat /mount-9p/test-1759146339489315865
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-686485 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [cc01027d-a939-4b18-ab61-3037f90d074c] Pending
helpers_test.go:352: "busybox-mount" [cc01027d-a939-4b18-ab61-3037f90d074c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [cc01027d-a939-4b18-ab61-3037f90d074c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [cc01027d-a939-4b18-ab61-3037f90d074c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004102829s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-686485 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdany-port1635415389/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.73s)

TestFunctional/parallel/MountCmd/specific-port (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdspecific-port2750840362/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (337.698232ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:45:48.561703  294425 retry.go:31] will retry after 728.470348ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdspecific-port2750840362/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 ssh "sudo umount -f /mount-9p": exit status 1 (307.5517ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-686485 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdspecific-port2750840362/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763218251/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763218251/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763218251/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T" /mount1: exit status 1 (521.650718ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:45:50.883821  294425 retry.go:31] will retry after 558.789559ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-686485 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763218251/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763218251/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-686485 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763218251/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

TestFunctional/parallel/ServiceCmd/List (1.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 service list: (1.29704938s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 service list -o json: (1.318288643s)
functional_test.go:1504: Took "1.318393394s" to run "out/minikube-linux-arm64 -p functional-686485 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.32s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 version -o=json --components: (1.153312576s)
--- PASS: TestFunctional/parallel/Version/components (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-686485 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-686485
localhost/kicbase/echo-server:functional-686485
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-686485 image ls --format short --alsologtostderr:
I0929 11:47:34.851538  326734 out.go:360] Setting OutFile to fd 1 ...
I0929 11:47:34.851756  326734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:47:34.851788  326734 out.go:374] Setting ErrFile to fd 2...
I0929 11:47:34.851810  326734 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:47:34.852079  326734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
I0929 11:47:34.852828  326734 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:47:34.852985  326734 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:47:34.853461  326734 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
I0929 11:47:34.870314  326734 ssh_runner.go:195] Run: systemctl --version
I0929 11:47:34.870365  326734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
I0929 11:47:34.888833  326734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
I0929 11:47:34.984840  326734 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-686485 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ d291939e99406 │ 84.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ a25f5ef9c34c3 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-686485  │ 31400e40e3ea9 │ 1.64MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ 996be7e86d9b3 │ 72.6MB │
│ localhost/kicbase/echo-server           │ functional-686485  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test     │ functional-686485  │ 9a8672b8cb6ca │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ 6fc32d66c1411 │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-686485 image ls --format table --alsologtostderr:
I0929 11:47:39.295670  327078 out.go:360] Setting OutFile to fd 1 ...
I0929 11:47:39.295876  327078 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:47:39.295888  327078 out.go:374] Setting ErrFile to fd 2...
I0929 11:47:39.295894  327078 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:47:39.296181  327078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
I0929 11:47:39.296891  327078 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:47:39.297147  327078 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:47:39.297663  327078 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
I0929 11:47:39.315405  327078 ssh_runner.go:195] Run: systemctl --version
I0929 11:47:39.315458  327078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
I0929 11:47:39.333614  327078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
I0929 11:47:39.428714  327078 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-686485 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"51592021"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c
549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-686485"],"size":"4788229"},{"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4
b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"72629077"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"e5d594a7e0ed553575ba02f398d2b850c1bae5fad759ab6b9b19ff67b2a8c255","repoDigests":["docker.io/library/121271b2771b64527d788c1450a80101f37f2f1e7e4bb306a3c0ef9c6af7d7f0-tmp@sha256:bc033107736523d956c179457c5f5162f24bbde8ed99dc14a4f35377a30ec40f"],"repoTags":[],"size":"1637644"},{"id":"9a8672b8cb6ca97b6a6fce5f670220cb80403125fa328d2dc7866ff42b3b1c2a","repoDigests":["localhost/minikube-local-cache-test@sha256:feeb124b52cfd81782c2d5afc3761ac02faf37dd953643e653d323820a2ad3a4"],"repoTags":["localhost/minikube-local-cache-test:functional-686485"],"size":"3330"},{"id":"31400e40e3ea9d3d8e72d50f129137794d1b0b08d2680685b2ba1f368c22b0e0","repoDigests":["localhost/my-
image@sha256:6dc3ab0d5f5077ea21d759bfcb92ca1e96acee88c944e4d626d47ac2d5273e38"],"repoTags":["localhost/my-image:functional-686485"],"size":"1640225"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"84818927"},{"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364d
a8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags
":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-686485 image ls --format json --alsologtostderr:
I0929 11:47:39.070542  327048 out.go:360] Setting OutFile to fd 1 ...
I0929 11:47:39.070660  327048 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:47:39.070671  327048 out.go:374] Setting ErrFile to fd 2...
I0929 11:47:39.070676  327048 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:47:39.070912  327048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
I0929 11:47:39.071546  327048 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:47:39.071659  327048 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:47:39.072159  327048 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
I0929 11:47:39.090037  327048 ssh_runner.go:195] Run: systemctl --version
I0929 11:47:39.090100  327048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
I0929 11:47:39.107142  327048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
I0929 11:47:39.200414  327048 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
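The JSON listing above is a plain top-level array of objects carrying id, repoDigests, repoTags and size fields, so it can be consumed directly. A minimal Go sketch of reading it back out of `image ls --format json`; the listedImage struct below simply mirrors the fields visible in the log and is not minikube's own type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage models one entry of the array printed above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-686485",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}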

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-686485 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "75938711"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "84818927"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "72629077"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-686485
size: "4788229"
- id: 9a8672b8cb6ca97b6a6fce5f670220cb80403125fa328d2dc7866ff42b3b1c2a
repoDigests:
- localhost/minikube-local-cache-test@sha256:feeb124b52cfd81782c2d5afc3761ac02faf37dd953643e653d323820a2ad3a4
repoTags:
- localhost/minikube-local-cache-test:functional-686485
size: "3330"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "51592021"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-686485 image ls --format yaml --alsologtostderr:
I0929 11:47:35.086722  326765 out.go:360] Setting OutFile to fd 1 ...
I0929 11:47:35.086841  326765 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:47:35.086853  326765 out.go:374] Setting ErrFile to fd 2...
I0929 11:47:35.086857  326765 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:47:35.087106  326765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
I0929 11:47:35.087822  326765 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:47:35.087944  326765 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:47:35.088504  326765 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
I0929 11:47:35.107190  326765 ssh_runner.go:195] Run: systemctl --version
I0929 11:47:35.107253  326765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
I0929 11:47:35.125802  326765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
I0929 11:47:35.220825  326765 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-686485 ssh pgrep buildkitd: exit status 1 (276.379039ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image build -t localhost/my-image:functional-686485 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 image build -t localhost/my-image:functional-686485 testdata/build --alsologtostderr: (3.220473989s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-686485 image build -t localhost/my-image:functional-686485 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e5d594a7e0e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-686485
--> 31400e40e3e
Successfully tagged localhost/my-image:functional-686485
31400e40e3ea9d3d8e72d50f129137794d1b0b08d2680685b2ba1f368c22b0e0
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-686485 image build -t localhost/my-image:functional-686485 testdata/build --alsologtostderr:
I0929 11:47:35.597737  326853 out.go:360] Setting OutFile to fd 1 ...
I0929 11:47:35.598614  326853 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:47:35.598656  326853 out.go:374] Setting ErrFile to fd 2...
I0929 11:47:35.598679  326853 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:47:35.598967  326853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
I0929 11:47:35.599625  326853 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:47:35.600382  326853 config.go:182] Loaded profile config "functional-686485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:47:35.600921  326853 cli_runner.go:164] Run: docker container inspect functional-686485 --format={{.State.Status}}
I0929 11:47:35.617900  326853 ssh_runner.go:195] Run: systemctl --version
I0929 11:47:35.617956  326853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686485
I0929 11:47:35.634843  326853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/functional-686485/id_rsa Username:docker}
I0929 11:47:35.728788  326853 build_images.go:161] Building image from path: /tmp/build.3007011603.tar
I0929 11:47:35.728851  326853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 11:47:35.737567  326853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3007011603.tar
I0929 11:47:35.740855  326853 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3007011603.tar: stat -c "%s %y" /var/lib/minikube/build/build.3007011603.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3007011603.tar': No such file or directory
I0929 11:47:35.740889  326853 ssh_runner.go:362] scp /tmp/build.3007011603.tar --> /var/lib/minikube/build/build.3007011603.tar (3072 bytes)
I0929 11:47:35.765501  326853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3007011603
I0929 11:47:35.774573  326853 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3007011603 -xf /var/lib/minikube/build/build.3007011603.tar
I0929 11:47:35.783565  326853 crio.go:315] Building image: /var/lib/minikube/build/build.3007011603
I0929 11:47:35.783716  326853 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-686485 /var/lib/minikube/build/build.3007011603 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0929 11:47:38.740239  326853 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-686485 /var/lib/minikube/build/build.3007011603 --cgroup-manager=cgroupfs: (2.956493793s)
I0929 11:47:38.740329  326853 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3007011603
I0929 11:47:38.748941  326853 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3007011603.tar
I0929 11:47:38.757216  326853 build_images.go:217] Built localhost/my-image:functional-686485 from /tmp/build.3007011603.tar
I0929 11:47:38.757247  326853 build_images.go:133] succeeded building to: functional-686485
I0929 11:47:38.757253  326853 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)
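The stderr above traces the cri-o build path: the build context is shipped into the node as a tar, unpacked under /var/lib/minikube/build, built with `sudo podman build`, then removed. A minimal sketch of that sequence, driven through `minikube cp` and `minikube ssh` rather than the test's internal ssh_runner; the staging path, build directory and helper names are illustrative.

package main

import (
	"fmt"
	"os/exec"
)

const profile = "functional-686485"

// mk invokes the minikube binary against the test profile.
func mk(args ...string) error {
	return exec.Command("out/minikube-linux-arm64", append([]string{"-p", profile}, args...)...).Run()
}

// buildInNode copies the build-context tar into the node, unpacks it,
// builds the image with podman, and cleans up, mirroring the logged steps.
func buildInNode(tag, localTar string) error {
	const stagedTar = "/tmp/build-context.tar"            // staging path (illustrative)
	const buildDir = "/var/lib/minikube/build/build.demo" // build dir (illustrative)

	if err := mk("cp", localTar, stagedTar); err != nil {
		return fmt.Errorf("copy build context into node: %w", err)
	}
	for _, cmd := range []string{
		"sudo mkdir -p " + buildDir,
		"sudo tar -C " + buildDir + " -xf " + stagedTar,
		"sudo podman build -t " + tag + " " + buildDir + " --cgroup-manager=cgroupfs",
		"sudo rm -rf " + buildDir + " && sudo rm -f " + stagedTar,
	} {
		if err := mk("ssh", cmd); err != nil {
			return fmt.Errorf("%q failed: %w", cmd, err)
		}
	}
	return nil
}

func main() {
	if err := buildInNode("localhost/my-image:functional-686485", "build.tar"); err != nil {
		fmt.Println(err)
	}
}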

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-686485
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image load --daemon kicbase/echo-server:functional-686485 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-686485 image load --daemon kicbase/echo-server:functional-686485 --alsologtostderr: (1.119349127s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image load --daemon kicbase/echo-server:functional-686485 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-686485
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image load --daemon kicbase/echo-server:functional-686485 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image save kicbase/echo-server:functional-686485 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image rm kicbase/echo-server:functional-686485 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-686485
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 image save --daemon kicbase/echo-server:functional-686485 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-686485
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 update-context --alsologtostderr -v=2
E0929 11:48:25.872449  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-686485 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-686485
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-686485
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-686485
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (206.87s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0929 11:51:24.256196  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:24.262581  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:24.273950  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:24.295386  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:24.336768  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:24.418190  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:24.579700  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:24.901338  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:25.543381  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:26.824681  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:29.386171  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:34.507757  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:51:44.750043  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:05.231420  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:46.192707  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:53:25.872658  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:54:08.114131  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m26.025473946s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (206.87s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.65s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 kubectl -- rollout status deployment/busybox: (6.818252621s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-8s7cg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-lf5m4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-w5nfw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-8s7cg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-lf5m4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-w5nfw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-8s7cg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-lf5m4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-w5nfw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.65s)
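The in-cluster DNS checks above follow one shape: list the busybox pod names via a jsonpath query, then exec an nslookup for each target inside every pod. A minimal sketch using plain kubectl against the ha-106038 context; the kubectl helper is illustrative and not the test's own wrapper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl command against the ha-106038 context.
func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "ha-106038"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	names, err := kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	if err != nil {
		panic(err)
	}
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(names) {
		for _, host := range targets {
			if _, err := kubectl("exec", pod, "--", "nslookup", host); err != nil {
				fmt.Printf("pod %s failed to resolve %s: %v\n", pod, host, err)
			}
		}
	}
}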

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.57s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-8s7cg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-8s7cg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-lf5m4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-lf5m4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-w5nfw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 kubectl -- exec busybox-7b57f96db7-w5nfw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.57s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 node add --alsologtostderr -v 5: (58.041213193s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5: (1.041058362s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.08s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.15s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-106038 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.15s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.077213935s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.45s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp testdata/cp-test.txt ha-106038:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile418300918/001/cp-test_ha-106038.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038:/home/docker/cp-test.txt ha-106038-m02:/home/docker/cp-test_ha-106038_ha-106038-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m02 "sudo cat /home/docker/cp-test_ha-106038_ha-106038-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038:/home/docker/cp-test.txt ha-106038-m03:/home/docker/cp-test_ha-106038_ha-106038-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m03 "sudo cat /home/docker/cp-test_ha-106038_ha-106038-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038:/home/docker/cp-test.txt ha-106038-m04:/home/docker/cp-test_ha-106038_ha-106038-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m04 "sudo cat /home/docker/cp-test_ha-106038_ha-106038-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp testdata/cp-test.txt ha-106038-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile418300918/001/cp-test_ha-106038-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m02:/home/docker/cp-test.txt ha-106038:/home/docker/cp-test_ha-106038-m02_ha-106038.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038 "sudo cat /home/docker/cp-test_ha-106038-m02_ha-106038.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m02:/home/docker/cp-test.txt ha-106038-m03:/home/docker/cp-test_ha-106038-m02_ha-106038-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m03 "sudo cat /home/docker/cp-test_ha-106038-m02_ha-106038-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m02:/home/docker/cp-test.txt ha-106038-m04:/home/docker/cp-test_ha-106038-m02_ha-106038-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m04 "sudo cat /home/docker/cp-test_ha-106038-m02_ha-106038-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp testdata/cp-test.txt ha-106038-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile418300918/001/cp-test_ha-106038-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m03:/home/docker/cp-test.txt ha-106038:/home/docker/cp-test_ha-106038-m03_ha-106038.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038 "sudo cat /home/docker/cp-test_ha-106038-m03_ha-106038.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m03:/home/docker/cp-test.txt ha-106038-m02:/home/docker/cp-test_ha-106038-m03_ha-106038-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m02 "sudo cat /home/docker/cp-test_ha-106038-m03_ha-106038-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m03:/home/docker/cp-test.txt ha-106038-m04:/home/docker/cp-test_ha-106038-m03_ha-106038-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m04 "sudo cat /home/docker/cp-test_ha-106038-m03_ha-106038-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp testdata/cp-test.txt ha-106038-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile418300918/001/cp-test_ha-106038-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m04:/home/docker/cp-test.txt ha-106038:/home/docker/cp-test_ha-106038-m04_ha-106038.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038 "sudo cat /home/docker/cp-test_ha-106038-m04_ha-106038.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m04:/home/docker/cp-test.txt ha-106038-m02:/home/docker/cp-test_ha-106038-m04_ha-106038-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m02 "sudo cat /home/docker/cp-test_ha-106038-m04_ha-106038-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 cp ha-106038-m04:/home/docker/cp-test.txt ha-106038-m03:/home/docker/cp-test_ha-106038-m04_ha-106038-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 ssh -n ha-106038-m03 "sudo cat /home/docker/cp-test_ha-106038-m04_ha-106038-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.45s)
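
Editor's note: the CopyFile steps above repeatedly pair a "minikube cp" with an "ssh -n <node> sudo cat" readback. The following is a minimal Go sketch of that round-trip check, not the test's actual code; the binary path, profile, node, and file paths are copied from this run purely as placeholders.

```go
// copyfile_check.go - sketch of the cp-then-cat round-trip exercised above.
// Assumes out/minikube-linux-arm64 and the ha-106038 profile from this run exist.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// run invokes the minikube binary with the given arguments and fails fast on error.
func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	const profile = "ha-106038"
	const node = "ha-106038-m02" // placeholder: any node name from the cluster

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}

	// minikube cp <src> <node>:<dst>, then read the file back over ssh, as in the log above.
	run("-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
	got := run("-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")

	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("content mismatch after round-trip on %s", node)
	}
	fmt.Println("cp/ssh round-trip OK on", node)
}
```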

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 node stop m02 --alsologtostderr -v 5: (11.928385323s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5: exit status 7 (738.029263ms)

                                                
                                                
-- stdout --
	ha-106038
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-106038-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-106038-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-106038-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:56:09.234682  343307 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:56:09.234878  343307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:56:09.234906  343307 out.go:374] Setting ErrFile to fd 2...
	I0929 11:56:09.234926  343307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:56:09.235196  343307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 11:56:09.235431  343307 out.go:368] Setting JSON to false
	I0929 11:56:09.235488  343307 mustload.go:65] Loading cluster: ha-106038
	I0929 11:56:09.235565  343307 notify.go:220] Checking for updates...
	I0929 11:56:09.236968  343307 config.go:182] Loaded profile config "ha-106038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:56:09.237022  343307 status.go:174] checking status of ha-106038 ...
	I0929 11:56:09.237733  343307 cli_runner.go:164] Run: docker container inspect ha-106038 --format={{.State.Status}}
	I0929 11:56:09.257569  343307 status.go:371] ha-106038 host status = "Running" (err=<nil>)
	I0929 11:56:09.257593  343307 host.go:66] Checking if "ha-106038" exists ...
	I0929 11:56:09.257891  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-106038
	I0929 11:56:09.284412  343307 host.go:66] Checking if "ha-106038" exists ...
	I0929 11:56:09.284836  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:56:09.286995  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-106038
	I0929 11:56:09.306670  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/ha-106038/id_rsa Username:docker}
	I0929 11:56:09.402098  343307 ssh_runner.go:195] Run: systemctl --version
	I0929 11:56:09.406401  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:56:09.417684  343307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:56:09.476110  343307 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-29 11:56:09.465046562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 11:56:09.476721  343307 kubeconfig.go:125] found "ha-106038" server: "https://192.168.49.254:8443"
	I0929 11:56:09.476764  343307 api_server.go:166] Checking apiserver status ...
	I0929 11:56:09.476827  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:56:09.489964  343307 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup
	I0929 11:56:09.499803  343307 api_server.go:182] apiserver freezer: "10:freezer:/docker/fdc225c27db517e7587fd7b09ef9b4754811206735bed841533579ec12d95043/crio/crio-d47b5b48fb0a88835ff794fccf914aeec89d2077fecfbfab278ce5abc39d022b"
	I0929 11:56:09.499880  343307 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fdc225c27db517e7587fd7b09ef9b4754811206735bed841533579ec12d95043/crio/crio-d47b5b48fb0a88835ff794fccf914aeec89d2077fecfbfab278ce5abc39d022b/freezer.state
	I0929 11:56:09.510357  343307 api_server.go:204] freezer state: "THAWED"
	I0929 11:56:09.510386  343307 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 11:56:09.518967  343307 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 11:56:09.518997  343307 status.go:463] ha-106038 apiserver status = Running (err=<nil>)
	I0929 11:56:09.519015  343307 status.go:176] ha-106038 status: &{Name:ha-106038 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:56:09.519032  343307 status.go:174] checking status of ha-106038-m02 ...
	I0929 11:56:09.519332  343307 cli_runner.go:164] Run: docker container inspect ha-106038-m02 --format={{.State.Status}}
	I0929 11:56:09.538338  343307 status.go:371] ha-106038-m02 host status = "Stopped" (err=<nil>)
	I0929 11:56:09.538367  343307 status.go:384] host is not running, skipping remaining checks
	I0929 11:56:09.538374  343307 status.go:176] ha-106038-m02 status: &{Name:ha-106038-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:56:09.538395  343307 status.go:174] checking status of ha-106038-m03 ...
	I0929 11:56:09.538747  343307 cli_runner.go:164] Run: docker container inspect ha-106038-m03 --format={{.State.Status}}
	I0929 11:56:09.557189  343307 status.go:371] ha-106038-m03 host status = "Running" (err=<nil>)
	I0929 11:56:09.557219  343307 host.go:66] Checking if "ha-106038-m03" exists ...
	I0929 11:56:09.557550  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-106038-m03
	I0929 11:56:09.575103  343307 host.go:66] Checking if "ha-106038-m03" exists ...
	I0929 11:56:09.575457  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:56:09.575508  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-106038-m03
	I0929 11:56:09.595321  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/ha-106038-m03/id_rsa Username:docker}
	I0929 11:56:09.693842  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:56:09.707355  343307 kubeconfig.go:125] found "ha-106038" server: "https://192.168.49.254:8443"
	I0929 11:56:09.707395  343307 api_server.go:166] Checking apiserver status ...
	I0929 11:56:09.707441  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:56:09.720976  343307 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1392/cgroup
	I0929 11:56:09.730891  343307 api_server.go:182] apiserver freezer: "10:freezer:/docker/8b3b260395b348763d739c7a096ce10d65695b58afe5075bd6e1ccdaab58cea3/crio/crio-22a5691fd59baaba15e88816dac4eb59afb610073344f70759e1713e59bbbdfa"
	I0929 11:56:09.730961  343307 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8b3b260395b348763d739c7a096ce10d65695b58afe5075bd6e1ccdaab58cea3/crio/crio-22a5691fd59baaba15e88816dac4eb59afb610073344f70759e1713e59bbbdfa/freezer.state
	I0929 11:56:09.739942  343307 api_server.go:204] freezer state: "THAWED"
	I0929 11:56:09.739975  343307 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 11:56:09.749897  343307 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 11:56:09.749927  343307 status.go:463] ha-106038-m03 apiserver status = Running (err=<nil>)
	I0929 11:56:09.749937  343307 status.go:176] ha-106038-m03 status: &{Name:ha-106038-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:56:09.749962  343307 status.go:174] checking status of ha-106038-m04 ...
	I0929 11:56:09.750279  343307 cli_runner.go:164] Run: docker container inspect ha-106038-m04 --format={{.State.Status}}
	I0929 11:56:09.770577  343307 status.go:371] ha-106038-m04 host status = "Running" (err=<nil>)
	I0929 11:56:09.770606  343307 host.go:66] Checking if "ha-106038-m04" exists ...
	I0929 11:56:09.770950  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-106038-m04
	I0929 11:56:09.789333  343307 host.go:66] Checking if "ha-106038-m04" exists ...
	I0929 11:56:09.789821  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:56:09.789879  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-106038-m04
	I0929 11:56:09.807210  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/ha-106038-m04/id_rsa Username:docker}
	I0929 11:56:09.902384  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:56:09.915322  343307 status.go:176] ha-106038-m04 status: &{Name:ha-106038-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.67s)
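
Editor's note: in the run above, "status" exits with status 7 once ha-106038-m02 is stopped, while still printing the per-node table. Below is a hedged Go sketch of reading that outcome programmatically; it assumes only the binary path and profile name shown in this run, and treats any non-zero exit merely as "some node is not fully running", which is what the log above demonstrates.

```go
// status_exitcode.go - sketch of distinguishing a degraded-status exit from a hard failure.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-106038", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // the same per-node table shown in the log above

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes report Running/Configured")
	case errors.As(err, &exitErr):
		// In the run above, exit status 7 accompanied a Stopped secondary node.
		fmt.Println("status exited with code", exitErr.ExitCode())
	default:
		log.Fatal(err) // binary missing or not executable, not a status result
	}
}
```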

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (34.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 node start m02 --alsologtostderr -v 5
E0929 11:56:24.256433  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 node start m02 --alsologtostderr -v 5: (32.822084134s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5: (1.408229184s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.416996841s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 stop --alsologtostderr -v 5
E0929 11:56:51.956440  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 stop --alsologtostderr -v 5: (36.890274643s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 start --wait true --alsologtostderr -v 5
E0929 11:58:25.872473  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 start --wait true --alsologtostderr -v 5: (1m35.910865897s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.99s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 node delete m03 --alsologtostderr -v 5: (11.880599896s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 stop --alsologtostderr -v 5: (35.423035738s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5: exit status 7 (109.109721ms)

                                                
                                                
-- stdout --
	ha-106038
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-106038-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-106038-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:59:48.597354  357507 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:59:48.597471  357507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:59:48.597481  357507 out.go:374] Setting ErrFile to fd 2...
	I0929 11:59:48.597486  357507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:59:48.597753  357507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 11:59:48.597960  357507 out.go:368] Setting JSON to false
	I0929 11:59:48.597994  357507 mustload.go:65] Loading cluster: ha-106038
	I0929 11:59:48.598089  357507 notify.go:220] Checking for updates...
	I0929 11:59:48.598400  357507 config.go:182] Loaded profile config "ha-106038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:59:48.598453  357507 status.go:174] checking status of ha-106038 ...
	I0929 11:59:48.599279  357507 cli_runner.go:164] Run: docker container inspect ha-106038 --format={{.State.Status}}
	I0929 11:59:48.617238  357507 status.go:371] ha-106038 host status = "Stopped" (err=<nil>)
	I0929 11:59:48.617257  357507 status.go:384] host is not running, skipping remaining checks
	I0929 11:59:48.617264  357507 status.go:176] ha-106038 status: &{Name:ha-106038 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:59:48.617290  357507 status.go:174] checking status of ha-106038-m02 ...
	I0929 11:59:48.617575  357507 cli_runner.go:164] Run: docker container inspect ha-106038-m02 --format={{.State.Status}}
	I0929 11:59:48.637760  357507 status.go:371] ha-106038-m02 host status = "Stopped" (err=<nil>)
	I0929 11:59:48.637783  357507 status.go:384] host is not running, skipping remaining checks
	I0929 11:59:48.637800  357507 status.go:176] ha-106038-m02 status: &{Name:ha-106038-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:59:48.637820  357507 status.go:174] checking status of ha-106038-m04 ...
	I0929 11:59:48.638112  357507 cli_runner.go:164] Run: docker container inspect ha-106038-m04 --format={{.State.Status}}
	I0929 11:59:48.654953  357507 status.go:371] ha-106038-m04 host status = "Stopped" (err=<nil>)
	I0929 11:59:48.654975  357507 status.go:384] host is not running, skipping remaining checks
	I0929 11:59:48.654982  357507 status.go:176] ha-106038-m04 status: &{Name:ha-106038-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (89.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m28.969897421s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (89.94s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 node add --control-plane --alsologtostderr -v 5
E0929 12:01:24.256988  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:01:28.946711  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 node add --control-plane --alsologtostderr -v 5: (1m17.844860284s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-106038 status --alsologtostderr -v 5: (1.014807612s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.024900818s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
x
+
TestJSONOutput/start/Command (82.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-507588 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0929 12:03:25.873118  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-507588 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.961847703s)
--- PASS: TestJSONOutput/start/Command (82.97s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-507588 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-507588 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.82s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-507588 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-507588 --output=json --user=testUser: (5.817514785s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-592450 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-592450 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (97.979691ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"344756d3-e59b-4425-b557-2e86e13bf99a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-592450] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"18c9f4b4-70eb-4be4-a5ec-1e0e54e80252","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21656"}}
	{"specversion":"1.0","id":"89a67a82-ca18-431f-afbc-143cf1083753","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8a763299-66e7-4fa6-bf17-8c8b460aca4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig"}}
	{"specversion":"1.0","id":"c187742e-636b-4d27-a45f-28fb23ad127b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube"}}
	{"specversion":"1.0","id":"6c640e41-fae0-49ec-8507-2ee534af448e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"09e6fe0f-d83d-4362-8c05-3bb993a0b8d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9953c2e9-2387-4611-a394-32b9d1a6afc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-592450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-592450
--- PASS: TestErrorJSONOutput (0.24s)
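
Editor's note: the --output=json runs above emit one JSON object per line with specversion, id, source, type, and data fields, as visible in the stdout block. The sketch below decodes such a stream and surfaces step and error events; the field names are taken from the output shown here, nothing beyond that is assumed.

```go
// json_events.go - decode minikube --output=json event lines like those shown above.
// Usage sketch: out/minikube-linux-arm64 start ... --output=json | go run json_events.go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors the fields visible in the logged output; data values there are all strings.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines

	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines mixed into the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```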

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (38.92s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-766859 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-766859 --network=: (36.501579983s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-766859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-766859
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-766859: (2.393932717s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.92s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (32.32s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-457492 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-457492 --network=bridge: (30.314596311s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-457492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-457492
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-457492: (1.980997042s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.32s)

                                                
                                    
x
+
TestKicExistingNetwork (32.6s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0929 12:05:33.597511  294425 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 12:05:33.613510  294425 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 12:05:33.614430  294425 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 12:05:33.614474  294425 cli_runner.go:164] Run: docker network inspect existing-network
W0929 12:05:33.630488  294425 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 12:05:33.630519  294425 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0929 12:05:33.630537  294425 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0929 12:05:33.630653  294425 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 12:05:33.649199  294425 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-412f8ec3d590 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:74:dd:ee:56:f0} reservation:<nil>}
I0929 12:05:33.649549  294425 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400232f280}
I0929 12:05:33.649582  294425 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 12:05:33.649635  294425 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 12:05:33.710479  294425 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-506146 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-506146 --network=existing-network: (30.416520213s)
helpers_test.go:175: Cleaning up "existing-network-506146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-506146
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-506146: (2.034761967s)
I0929 12:06:06.177659  294425 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.60s)
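
Editor's note: the test above pre-creates a bridge network with an explicit subnet and gateway and then starts a profile with --network=existing-network. A rough Go sketch of that two-step flow follows; the subnet, gateway, and names are the values from this run and are placeholders, and the minikube-internal --label flags from the logged docker command are omitted.

```go
// existing_network.go - sketch of the pre-created-network flow exercised above.
package main

import (
	"log"
	"os"
	"os/exec"
)

// must runs a command, streaming its output, and stops on the first failure.
func must(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	// Same shape as the `docker network create` invocation logged above.
	must("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"existing-network")

	// Start a profile attached to the pre-existing network, as the test does.
	must("out/minikube-linux-arm64", "start", "-p", "existing-network-506146",
		"--network=existing-network")
}
```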

                                                
                                    
x
+
TestKicCustomSubnet (35.68s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-359191 --subnet=192.168.60.0/24
E0929 12:06:24.260464  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-359191 --subnet=192.168.60.0/24: (33.542707727s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-359191 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-359191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-359191
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-359191: (2.116973392s)
--- PASS: TestKicCustomSubnet (35.68s)

                                                
                                    
x
+
TestKicStaticIP (35.99s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-902134 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-902134 --static-ip=192.168.200.200: (33.698086469s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-902134 ip
helpers_test.go:175: Cleaning up "static-ip-902134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-902134
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-902134: (2.143878808s)
--- PASS: TestKicStaticIP (35.99s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (66.32s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-569941 --driver=docker  --container-runtime=crio
E0929 12:07:47.318319  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-569941 --driver=docker  --container-runtime=crio: (30.475428378s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-572641 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-572641 --driver=docker  --container-runtime=crio: (30.599266149s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-569941
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-572641
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-572641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-572641
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-572641: (1.986005126s)
helpers_test.go:175: Cleaning up "first-569941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-569941
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-569941: (1.903941101s)
--- PASS: TestMinikubeProfile (66.32s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.78s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-973073 --memory=3072 --mount-string /tmp/TestMountStartserial1019579685/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0929 12:08:25.872404  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-973073 --memory=3072 --mount-string /tmp/TestMountStartserial1019579685/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.779713968s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.78s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-973073 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.17s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-974788 --memory=3072 --mount-string /tmp/TestMountStartserial1019579685/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-974788 --memory=3072 --mount-string /tmp/TestMountStartserial1019579685/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.166536568s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.17s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-974788 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-973073 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-973073 --alsologtostderr -v=5: (1.636891335s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-974788 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-974788
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-974788: (1.19226974s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.29s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-974788
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-974788: (7.294302821s)
--- PASS: TestMountStart/serial/RestartStopped (8.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-974788 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)
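
The MountStart sequence above exercises a full host-mount lifecycle: start a profile with a mount, verify it over ssh, delete the sibling profile, stop, restart, and verify again. A minimal sketch of the same flow, reusing the flags, mount path, and profile name from this run (the binary path assumes the same workspace layout):

    # start a second profile with a host mount and no Kubernetes components
    out/minikube-linux-arm64 start -p mount-start-2-974788 --memory=3072 \
      --mount-string /tmp/TestMountStartserial1019579685/001:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46465 \
      --no-kubernetes --driver=docker --container-runtime=crio
    # verify the mount is visible inside the node
    out/minikube-linux-arm64 -p mount-start-2-974788 ssh -- ls /minikube-host
    # stop and restart; the mount should still be visible afterwards
    out/minikube-linux-arm64 stop -p mount-start-2-974788
    out/minikube-linux-arm64 start -p mount-start-2-974788
    out/minikube-linux-arm64 -p mount-start-2-974788 ssh -- ls /minikube-host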

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (138.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-037220 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-037220 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m17.561439079s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.08s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-037220 -- rollout status deployment/busybox: (4.717172466s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-6xnq8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-tztwg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-6xnq8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-tztwg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-6xnq8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-tztwg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.57s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-6xnq8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-6xnq8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-tztwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-tztwg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
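
The host-reachability check above resolves host.minikube.internal from inside each busybox pod and pings the resolved gateway address. Per pod it is roughly (the pod name and 192.168.67.1 are specific to this run):

    # extract the host address from nslookup output inside the pod
    out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-6xnq8 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # ping the resolved address once from the same pod
    out/minikube-linux-arm64 kubectl -p multinode-037220 -- exec busybox-7b57f96db7-6xnq8 -- \
      sh -c "ping -c 1 192.168.67.1"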

                                                
                                    
TestMultiNode/serial/AddNode (55.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-037220 -v=5 --alsologtostderr
E0929 12:11:24.256416  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-037220 -v=5 --alsologtostderr: (54.477454015s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.14s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-037220 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp testdata/cp-test.txt multinode-037220:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp multinode-037220:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4151406262/001/cp-test_multinode-037220.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp multinode-037220:/home/docker/cp-test.txt multinode-037220-m02:/home/docker/cp-test_multinode-037220_multinode-037220-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m02 "sudo cat /home/docker/cp-test_multinode-037220_multinode-037220-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp multinode-037220:/home/docker/cp-test.txt multinode-037220-m03:/home/docker/cp-test_multinode-037220_multinode-037220-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m03 "sudo cat /home/docker/cp-test_multinode-037220_multinode-037220-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp testdata/cp-test.txt multinode-037220-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp multinode-037220-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4151406262/001/cp-test_multinode-037220-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp multinode-037220-m02:/home/docker/cp-test.txt multinode-037220:/home/docker/cp-test_multinode-037220-m02_multinode-037220.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220 "sudo cat /home/docker/cp-test_multinode-037220-m02_multinode-037220.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp multinode-037220-m02:/home/docker/cp-test.txt multinode-037220-m03:/home/docker/cp-test_multinode-037220-m02_multinode-037220-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m03 "sudo cat /home/docker/cp-test_multinode-037220-m02_multinode-037220-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp testdata/cp-test.txt multinode-037220-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp multinode-037220-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4151406262/001/cp-test_multinode-037220-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp multinode-037220-m03:/home/docker/cp-test.txt multinode-037220:/home/docker/cp-test_multinode-037220-m03_multinode-037220.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220 "sudo cat /home/docker/cp-test_multinode-037220-m03_multinode-037220.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 cp multinode-037220-m03:/home/docker/cp-test.txt multinode-037220-m02:/home/docker/cp-test_multinode-037220-m03_multinode-037220-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m02 "sudo cat /home/docker/cp-test_multinode-037220-m03_multinode-037220-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.09s)
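
Every CopyFile step above pairs a cp with an ssh readback on the destination, so a missing or corrupted file shows up immediately. The node-to-node variant, with names from this run, looks like:

    # copy from the control plane to a worker, then read it back on the worker
    out/minikube-linux-arm64 -p multinode-037220 cp \
      multinode-037220:/home/docker/cp-test.txt \
      multinode-037220-m02:/home/docker/cp-test_multinode-037220_multinode-037220-m02.txt
    out/minikube-linux-arm64 -p multinode-037220 ssh -n multinode-037220-m02 \
      "sudo cat /home/docker/cp-test_multinode-037220_multinode-037220-m02.txt"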

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-037220 node stop m03: (1.23261121s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-037220 status: exit status 7 (525.069651ms)

                                                
                                                
-- stdout --
	multinode-037220
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-037220-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-037220-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-037220 status --alsologtostderr: exit status 7 (525.516622ms)

                                                
                                                
-- stdout --
	multinode-037220
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-037220-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-037220-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:12:25.646854  410803 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:12:25.646994  410803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:12:25.647007  410803 out.go:374] Setting ErrFile to fd 2...
	I0929 12:12:25.647012  410803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:12:25.647306  410803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 12:12:25.647578  410803 out.go:368] Setting JSON to false
	I0929 12:12:25.647670  410803 mustload.go:65] Loading cluster: multinode-037220
	I0929 12:12:25.647743  410803 notify.go:220] Checking for updates...
	I0929 12:12:25.648788  410803 config.go:182] Loaded profile config "multinode-037220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:12:25.648836  410803 status.go:174] checking status of multinode-037220 ...
	I0929 12:12:25.649491  410803 cli_runner.go:164] Run: docker container inspect multinode-037220 --format={{.State.Status}}
	I0929 12:12:25.670567  410803 status.go:371] multinode-037220 host status = "Running" (err=<nil>)
	I0929 12:12:25.670589  410803 host.go:66] Checking if "multinode-037220" exists ...
	I0929 12:12:25.670882  410803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-037220
	I0929 12:12:25.693929  410803 host.go:66] Checking if "multinode-037220" exists ...
	I0929 12:12:25.694239  410803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:12:25.694291  410803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-037220
	I0929 12:12:25.712658  410803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/multinode-037220/id_rsa Username:docker}
	I0929 12:12:25.809399  410803 ssh_runner.go:195] Run: systemctl --version
	I0929 12:12:25.813595  410803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:12:25.825351  410803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:12:25.888124  410803 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-29 12:12:25.878373213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 12:12:25.888760  410803 kubeconfig.go:125] found "multinode-037220" server: "https://192.168.67.2:8443"
	I0929 12:12:25.888807  410803 api_server.go:166] Checking apiserver status ...
	I0929 12:12:25.888854  410803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:12:25.900683  410803 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	I0929 12:12:25.909889  410803 api_server.go:182] apiserver freezer: "10:freezer:/docker/bbcd68f343d32046b43d202895eccf6d008f8fd75df5cf7e17cf01d1f6ada495/crio/crio-d773699666aa5aaadcaf3411c91406a1c28f2141c0d41b313d93e5847d758d6d"
	I0929 12:12:25.909960  410803 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bbcd68f343d32046b43d202895eccf6d008f8fd75df5cf7e17cf01d1f6ada495/crio/crio-d773699666aa5aaadcaf3411c91406a1c28f2141c0d41b313d93e5847d758d6d/freezer.state
	I0929 12:12:25.918241  410803 api_server.go:204] freezer state: "THAWED"
	I0929 12:12:25.918272  410803 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 12:12:25.926660  410803 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 12:12:25.926687  410803 status.go:463] multinode-037220 apiserver status = Running (err=<nil>)
	I0929 12:12:25.926698  410803 status.go:176] multinode-037220 status: &{Name:multinode-037220 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:12:25.926714  410803 status.go:174] checking status of multinode-037220-m02 ...
	I0929 12:12:25.927013  410803 cli_runner.go:164] Run: docker container inspect multinode-037220-m02 --format={{.State.Status}}
	I0929 12:12:25.945225  410803 status.go:371] multinode-037220-m02 host status = "Running" (err=<nil>)
	I0929 12:12:25.945250  410803 host.go:66] Checking if "multinode-037220-m02" exists ...
	I0929 12:12:25.945582  410803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-037220-m02
	I0929 12:12:25.961916  410803 host.go:66] Checking if "multinode-037220-m02" exists ...
	I0929 12:12:25.962217  410803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:12:25.962261  410803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-037220-m02
	I0929 12:12:25.981399  410803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21656-292570/.minikube/machines/multinode-037220-m02/id_rsa Username:docker}
	I0929 12:12:26.081521  410803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:12:26.093705  410803 status.go:176] multinode-037220-m02 status: &{Name:multinode-037220-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:12:26.093738  410803 status.go:174] checking status of multinode-037220-m03 ...
	I0929 12:12:26.094044  410803 cli_runner.go:164] Run: docker container inspect multinode-037220-m03 --format={{.State.Status}}
	I0929 12:12:26.110549  410803 status.go:371] multinode-037220-m03 host status = "Stopped" (err=<nil>)
	I0929 12:12:26.110575  410803 status.go:384] host is not running, skipping remaining checks
	I0929 12:12:26.110582  410803 status.go:176] multinode-037220-m03 status: &{Name:multinode-037220-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
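
As the two status calls above show, minikube status exits 7 while any node is stopped, so the report succeeds even though the shell sees a failure. A script following the same pattern has to capture that exit code rather than abort on it; a small sketch under that assumption:

    out/minikube-linux-arm64 -p multinode-037220 node stop m03
    # status exits 7 with m03 down; keep the code instead of failing the script
    rc=0
    out/minikube-linux-arm64 -p multinode-037220 status --alsologtostderr || rc=$?
    echo "status exit code: ${rc}"   # 7 in the run above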

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-037220 node start m03 -v=5 --alsologtostderr: (6.978565165s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.75s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-037220
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-037220
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-037220: (24.765230709s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-037220 --wait=true -v=5 --alsologtostderr
E0929 12:13:25.873166  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-037220 --wait=true -v=5 --alsologtostderr: (54.036379839s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-037220
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.93s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-037220 node delete m03: (4.877701658s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.55s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-037220 stop: (23.604087805s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-037220 status: exit status 7 (100.356816ms)

                                                
                                                
-- stdout --
	multinode-037220
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-037220-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-037220 status --alsologtostderr: exit status 7 (97.449057ms)

                                                
                                                
-- stdout --
	multinode-037220
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-037220-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:14:22.094230  418684 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:14:22.094418  418684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:14:22.094450  418684 out.go:374] Setting ErrFile to fd 2...
	I0929 12:14:22.094472  418684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:14:22.094758  418684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 12:14:22.094994  418684 out.go:368] Setting JSON to false
	I0929 12:14:22.095055  418684 mustload.go:65] Loading cluster: multinode-037220
	I0929 12:14:22.095545  418684 config.go:182] Loaded profile config "multinode-037220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:14:22.095605  418684 status.go:174] checking status of multinode-037220 ...
	I0929 12:14:22.096165  418684 cli_runner.go:164] Run: docker container inspect multinode-037220 --format={{.State.Status}}
	I0929 12:14:22.095108  418684 notify.go:220] Checking for updates...
	I0929 12:14:22.116502  418684 status.go:371] multinode-037220 host status = "Stopped" (err=<nil>)
	I0929 12:14:22.116529  418684 status.go:384] host is not running, skipping remaining checks
	I0929 12:14:22.116536  418684 status.go:176] multinode-037220 status: &{Name:multinode-037220 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:14:22.116569  418684 status.go:174] checking status of multinode-037220-m02 ...
	I0929 12:14:22.116875  418684 cli_runner.go:164] Run: docker container inspect multinode-037220-m02 --format={{.State.Status}}
	I0929 12:14:22.139804  418684 status.go:371] multinode-037220-m02 host status = "Stopped" (err=<nil>)
	I0929 12:14:22.139828  418684 status.go:384] host is not running, skipping remaining checks
	I0929 12:14:22.139835  418684 status.go:176] multinode-037220-m02 status: &{Name:multinode-037220-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.80s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (57.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-037220 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-037220 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (56.917970543s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-037220 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.59s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (30.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-037220
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-037220-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-037220-m02 --driver=docker  --container-runtime=crio: exit status 14 (91.493784ms)

                                                
                                                
-- stdout --
	* [multinode-037220-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-037220-m02' is duplicated with machine name 'multinode-037220-m02' in profile 'multinode-037220'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-037220-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-037220-m03 --driver=docker  --container-runtime=crio: (28.492856487s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-037220
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-037220: exit status 80 (325.098471ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-037220 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-037220-m03 already exists in multinode-037220-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-037220-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-037220-m03: (1.96870046s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.93s)
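
Both failures in ValidateNameConflict are the intended guard rails: a new profile may not reuse a machine name that already belongs to another profile (exit 14, MK_USAGE), and node add refuses a node whose generated name collides with an existing profile (exit 80, GUEST_NODE_ADD). The same probe, with the names from this run:

    # rejected: multinode-037220-m02 is already a machine inside profile multinode-037220
    out/minikube-linux-arm64 start -p multinode-037220-m02 --driver=docker --container-runtime=crio
    # allowed: -m03 is free at this point, so it becomes a standalone profile ...
    out/minikube-linux-arm64 start -p multinode-037220-m03 --driver=docker --container-runtime=crio
    # ... which makes the next auto-named node (m03) collide, so node add fails
    out/minikube-linux-arm64 node add -p multinode-037220
    out/minikube-linux-arm64 delete -p multinode-037220-m03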

                                                
                                    
TestPreload (134.89s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-888284 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0929 12:16:24.256368  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-888284 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.784595909s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-888284 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-888284 image pull gcr.io/k8s-minikube/busybox: (3.786150462s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-888284
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-888284: (5.918473894s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-888284 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-888284 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (59.837993859s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-888284 image list
helpers_test.go:175: Cleaning up "test-preload-888284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-888284
E0929 12:18:08.948435  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-888284: (2.339717956s)
--- PASS: TestPreload (134.89s)
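
TestPreload checks that an image pulled into a cluster created without the preload tarball survives a stop and a restart with the binary under test. In outline, with the versions and profile name from this run:

    out/minikube-linux-arm64 start -p test-preload-888284 --memory=3072 --wait=true \
      --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    out/minikube-linux-arm64 -p test-preload-888284 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-888284
    out/minikube-linux-arm64 start -p test-preload-888284 --memory=3072 --wait=true \
      --driver=docker --container-runtime=crio
    # the pulled busybox image should still appear here
    out/minikube-linux-arm64 -p test-preload-888284 image list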

                                                
                                    
TestScheduledStopUnix (108.42s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-578343 --memory=3072 --driver=docker  --container-runtime=crio
E0929 12:18:25.872221  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-578343 --memory=3072 --driver=docker  --container-runtime=crio: (32.653490581s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-578343 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-578343 -n scheduled-stop-578343
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-578343 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 12:18:42.764418  294425 retry.go:31] will retry after 92.469µs: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.766507  294425 retry.go:31] will retry after 176.206µs: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.766806  294425 retry.go:31] will retry after 148.204µs: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.767460  294425 retry.go:31] will retry after 172.467µs: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.768720  294425 retry.go:31] will retry after 583.008µs: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.769900  294425 retry.go:31] will retry after 404.589µs: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.771749  294425 retry.go:31] will retry after 1.325322ms: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.773956  294425 retry.go:31] will retry after 1.866573ms: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.776155  294425 retry.go:31] will retry after 2.74757ms: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.779367  294425 retry.go:31] will retry after 4.72517ms: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.784590  294425 retry.go:31] will retry after 7.378866ms: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.792811  294425 retry.go:31] will retry after 5.106317ms: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.799117  294425 retry.go:31] will retry after 6.974222ms: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.807119  294425 retry.go:31] will retry after 17.735031ms: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.825339  294425 retry.go:31] will retry after 33.846508ms: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
I0929 12:18:42.859571  294425 retry.go:31] will retry after 25.577174ms: open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/scheduled-stop-578343/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-578343 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-578343 -n scheduled-stop-578343
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-578343
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-578343 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-578343
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-578343: exit status 7 (72.394062ms)

                                                
                                                
-- stdout --
	scheduled-stop-578343
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-578343 -n scheduled-stop-578343
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-578343 -n scheduled-stop-578343: exit status 7 (66.637523ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-578343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-578343
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-578343: (4.196609224s)
--- PASS: TestScheduledStopUnix (108.42s)
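
The scheduled-stop flow above schedules a delayed stop, replaces the schedule, cancels it, schedules again with a short delay, and finally observes the host as Stopped (status then exits 7). Condensed to the commands involved, with the profile name from this run (the sleep is an assumed margin, not taken from the log):

    out/minikube-linux-arm64 stop -p scheduled-stop-578343 --schedule 5m
    out/minikube-linux-arm64 stop -p scheduled-stop-578343 --schedule 15s     # replaces the 5m schedule
    out/minikube-linux-arm64 stop -p scheduled-stop-578343 --cancel-scheduled
    out/minikube-linux-arm64 stop -p scheduled-stop-578343 --schedule 15s
    sleep 20                                                                  # assumed: wait for the stop to fire
    out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-578343   # prints Stopped, exit 7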

                                                
                                    
TestInsufficientStorage (13.12s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-536641 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-536641 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.666218374s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0dea79d9-f33f-4010-a86b-028b3973674e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-536641] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fac5367b-71a0-4c8e-a40a-a55f98831a4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21656"}}
	{"specversion":"1.0","id":"fa16f8ff-ea80-4288-913d-c5cdc9df149e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ce8c02c8-06f5-4f54-8950-30863639325d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig"}}
	{"specversion":"1.0","id":"3b652708-a824-4cc6-ae69-22ae1976e2be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube"}}
	{"specversion":"1.0","id":"35c5e072-f229-45cb-9217-95f989573a12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8ec68a24-b597-4f72-9834-c5f2fc15dc7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2f32d778-1b9f-465c-ad2b-9dcdebc482cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4a912e1d-b98e-46b0-bbc5-f9067f3f0cf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"555c2960-a9ac-4579-b8b0-ef3653cb5b2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c37faea-59f0-4c8e-87a5-443b56f7f136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6db28a0b-1ff6-4002-b563-fc9fc7cb58ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-536641\" primary control-plane node in \"insufficient-storage-536641\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b8b4096-c146-4fd6-8de3-6a8d30028969","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"db8c0931-d654-4b23-aeb2-395876cd5651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6736365-1400-4037-b8a5-b3a825c80fd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-536641 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-536641 --output=json --layout=cluster: exit status 7 (284.552132ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-536641","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-536641","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 12:20:08.958474  436059 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-536641" does not appear in /home/jenkins/minikube-integration/21656-292570/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-536641 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-536641 --output=json --layout=cluster: exit status 7 (306.537602ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-536641","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-536641","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 12:20:09.266779  436121 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-536641" does not appear in /home/jenkins/minikube-integration/21656-292570/kubeconfig
	E0929 12:20:09.276615  436121 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/insufficient-storage-536641/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-536641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-536641
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-536641: (1.863683474s)
--- PASS: TestInsufficientStorage (13.12s)
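
The insufficient-storage path is driven artificially through the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values visible in the JSON events above; with those in the environment, start exits 26 (RSRC_DOCKER_STORAGE) and status reports StatusCode 507 until the profile is deleted or --force is passed. A sketch under that assumption, with the values from this run:

    # pretend /var has 100 units of capacity with only 19 available
    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    out/minikube-linux-arm64 start -p insufficient-storage-536641 --memory=3072 \
      --output=json --wait=true --driver=docker --container-runtime=crio        # exit 26
    out/minikube-linux-arm64 status -p insufficient-storage-536641 --output=json --layout=cluster   # exit 7, StatusCode 507
    out/minikube-linux-arm64 delete -p insufficient-storage-536641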

                                                
                                    
TestRunningBinaryUpgrade (54.05s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3698660113 start -p running-upgrade-839250 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3698660113 start -p running-upgrade-839250 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.492195471s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-839250 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-839250 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.343884774s)
helpers_test.go:175: Cleaning up "running-upgrade-839250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-839250
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-839250: (2.182881914s)
--- PASS: TestRunningBinaryUpgrade (54.05s)
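
RunningBinaryUpgrade starts a cluster with an older released binary (unpacked to a temp path by the harness) and then restarts the same, still-running profile with the freshly built binary. Schematically, using the temp binary name from this run (it changes every run):

    # the old release creates the profile ...
    /tmp/minikube-v1.32.0.3698660113 start -p running-upgrade-839250 --memory=3072 \
      --vm-driver=docker --container-runtime=crio
    # ... and the binary under test must take it over without recreating it
    out/minikube-linux-arm64 start -p running-upgrade-839250 --memory=3072 \
      --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p running-upgrade-839250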

                                                
                                    
TestKubernetesUpgrade (344.01s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.254889589s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-612628
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-612628: (1.254265413s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-612628 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-612628 status --format={{.Host}}: exit status 7 (86.955992ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m35.545680034s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-612628 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (128.963819ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-612628] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-612628
	    minikube start -p kubernetes-upgrade-612628 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6126282 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-612628 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.204924845s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-612628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-612628
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-612628: (2.395616804s)
--- PASS: TestKubernetesUpgrade (344.01s)
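Note: the upgrade-then-refused-downgrade flow exercised above can be replayed by hand. A minimal sketch, assuming a locally built minikube at out/minikube-linux-arm64 and reusing the test-generated profile name:

    # start on the old Kubernetes version, then stop the cluster
    out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-612628
    # upgrade the same profile in place
    out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=crio
    # an in-place downgrade is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED)
    out/minikube-linux-arm64 start -p kubernetes-upgrade-612628 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio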

                                                
                                    
TestMissingContainerUpgrade (111.27s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4184036972 start -p missing-upgrade-471056 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4184036972 start -p missing-upgrade-471056 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.855319028s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-471056
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-471056
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-471056 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0929 12:21:24.260898  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-471056 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.486667382s)
helpers_test.go:175: Cleaning up "missing-upgrade-471056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-471056
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-471056: (2.152901874s)
--- PASS: TestMissingContainerUpgrade (111.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-721757 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-721757 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (74.341854ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-721757] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
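As the MK_USAGE error above shows, --no-kubernetes and --kubernetes-version cannot be combined (exit status 14). A minimal sketch of the two ways out, taken from this run's own output and the later StartWithStopK8s step:

    # either clear the global version setting, as the error suggests...
    minikube config unset kubernetes-version
    # ...or start without Kubernetes and without a version flag
    out/minikube-linux-arm64 start -p NoKubernetes-721757 --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio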

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (50.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-721757 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-721757 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (50.289549146s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-721757 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (50.80s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (116.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-721757 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-721757 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m54.13882652s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-721757 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-721757 status -o json: exit status 2 (427.093815ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-721757","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-721757
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-721757: (2.139643912s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (116.71s)

                                                
                                    
TestNoKubernetes/serial/Start (8.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-721757 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-721757 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.329645912s)
--- PASS: TestNoKubernetes/serial/Start (8.33s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-721757 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-721757 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.652735ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
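The check above leans on systemctl's exit code rather than parsing output. A minimal sketch, assuming the NoKubernetes-721757 profile is still running:

    # systemctl is-active exits non-zero (status 3, inactive) when kubelet is not running;
    # minikube ssh surfaces that as exit status 1, which the test treats as success
    out/minikube-linux-arm64 ssh -p NoKubernetes-721757 "sudo systemctl is-active --quiet service kubelet"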

                                                
                                    
TestNoKubernetes/serial/ProfileList (34.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
E0929 12:23:25.873101  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-arm64 profile list: (19.258212458s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (15.027994128s)
--- PASS: TestNoKubernetes/serial/ProfileList (34.29s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-721757
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-721757: (1.207147237s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-721757 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-721757 --driver=docker  --container-runtime=crio: (6.866351994s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.87s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-721757 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-721757 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.398052ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (55.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.771774857 start -p stopped-upgrade-300490 --memory=3072 --vm-driver=docker  --container-runtime=crio
E0929 12:24:27.319975  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.771774857 start -p stopped-upgrade-300490 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.660450584s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.771774857 -p stopped-upgrade-300490 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.771774857 -p stopped-upgrade-300490 stop: (1.275236089s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-300490 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-300490 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.785410163s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (55.72s)
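For reference, the upgrade above drives two different binaries against the same profile. A minimal sketch, where /tmp/minikube-v1.32.0.771774857 is a test-generated temporary copy of the v1.32.0 release binary:

    # create and stop a cluster with the old release binary
    /tmp/minikube-v1.32.0.771774857 start -p stopped-upgrade-300490 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.771774857 -p stopped-upgrade-300490 stop
    # restart the stopped cluster with the freshly built binary
    out/minikube-linux-arm64 start -p stopped-upgrade-300490 --memory=3072 --driver=docker --container-runtime=crio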

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-300490
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-300490: (1.569410805s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.57s)

                                                
                                    
TestPause/serial/Start (82.1s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-052634 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0929 12:26:24.256766  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-052634 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.10347195s)
--- PASS: TestPause/serial/Start (82.10s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.18s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-052634 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-052634 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.15812659s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.18s)

                                                
                                    
TestPause/serial/Pause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-052634 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-052634 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-052634 --output=json --layout=cluster: exit status 2 (361.614568ms)

                                                
                                                
-- stdout --
	{"Name":"pause-052634","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-052634","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)

                                                
                                    
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-052634 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-052634 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
TestPause/serial/DeletePaused (3.05s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-052634 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-052634 --alsologtostderr -v=5: (3.053100857s)
--- PASS: TestPause/serial/DeletePaused (3.05s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (1.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.221241878s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-052634
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-052634: exit status 1 (27.577399ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-052634: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.33s)
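Taken together, the TestPause serial steps above amount to the following lifecycle. A minimal sketch using the commands from this run (pause-052634 is a test-generated profile name):

    out/minikube-linux-arm64 pause -p pause-052634
    # while paused, cluster status reports StatusCode 418 ("Paused") and the command exits 2
    out/minikube-linux-arm64 status -p pause-052634 --output=json --layout=cluster
    out/minikube-linux-arm64 unpause -p pause-052634
    out/minikube-linux-arm64 delete -p pause-052634
    # after delete, the profile's Docker volume is gone, so inspect fails
    docker volume inspect pause-052634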

                                                
                                    
TestNetworkPlugins/group/false (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-800992 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-800992 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (382.33813ms)

                                                
                                                
-- stdout --
	* [false-800992] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:27:51.906644  473293 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:27:51.906888  473293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:27:51.906916  473293 out.go:374] Setting ErrFile to fd 2...
	I0929 12:27:51.906935  473293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:27:51.907241  473293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-292570/.minikube/bin
	I0929 12:27:51.907696  473293 out.go:368] Setting JSON to false
	I0929 12:27:51.909843  473293 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7823,"bootTime":1759141049,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0929 12:27:51.909968  473293 start.go:140] virtualization:  
	I0929 12:27:51.914064  473293 out.go:179] * [false-800992] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 12:27:51.919167  473293 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 12:27:51.919345  473293 notify.go:220] Checking for updates...
	I0929 12:27:51.924835  473293 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:27:51.927767  473293 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-292570/kubeconfig
	I0929 12:27:51.930642  473293 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-292570/.minikube
	I0929 12:27:51.933453  473293 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 12:27:51.936441  473293 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:27:51.942031  473293 config.go:182] Loaded profile config "force-systemd-flag-369490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:27:51.942139  473293 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:27:51.995021  473293 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 12:27:51.995144  473293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:27:52.174550  473293 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-29 12:27:52.159116231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 12:27:52.174655  473293 docker.go:318] overlay module found
	I0929 12:27:52.178286  473293 out.go:179] * Using the docker driver based on user configuration
	I0929 12:27:52.181253  473293 start.go:304] selected driver: docker
	I0929 12:27:52.181269  473293 start.go:924] validating driver "docker" against <nil>
	I0929 12:27:52.181282  473293 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:27:52.184940  473293 out.go:203] 
	W0929 12:27:52.187815  473293 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0929 12:27:52.194056  473293 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-800992 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-800992" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-800992

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800992"

                                                
                                                
----------------------- debugLogs end: false-800992 [took: 4.443706988s] --------------------------------
helpers_test.go:175: Cleaning up "false-800992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-800992
--- PASS: TestNetworkPlugins/group/false (5.05s)
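This rejection is the expected result: with --container-runtime=crio, minikube refuses --cni=false (MK_USAGE, exit status 14) because CRI-O needs a CNI plugin. A minimal sketch of the check, using the command from this run; dropping --cni=false would let minikube select a CNI automatically:

    # exits 14 with: X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
    out/minikube-linux-arm64 start -p false-800992 --memory=3072 --cni=false --driver=docker --container-runtime=crio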

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (89.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-156938 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-156938 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m29.898540343s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (89.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-156938 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0e294a99-f7d0-44cd-965f-3ebb270a76ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0e294a99-f7d0-44cd-965f-3ebb270a76ff] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003475771s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-156938 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-156938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-156938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.090001881s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-156938 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)
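The addon here is enabled with overridden image and registry values; fake.domain appears to be chosen deliberately as an unreachable registry for the metrics-server image. A minimal sketch of the commands from this run:

    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-156938 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # confirm the deployment picked up the overridden image/registry
    kubectl --context old-k8s-version-156938 describe deploy/metrics-server -n kube-system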

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-156938 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-156938 --alsologtostderr -v=3: (11.894524995s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156938 -n old-k8s-version-156938
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156938 -n old-k8s-version-156938: exit status 7 (73.902242ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-156938 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (56.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-156938 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0929 12:31:24.256436  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-156938 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.774986012s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-156938 -n old-k8s-version-156938
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (56.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pqlhj" [7f03b79f-d075-4096-bdfb-39c10065e83f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004069024s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pqlhj" [7f03b79f-d075-4096-bdfb-39c10065e83f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003564277s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-156938 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-156938 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-156938 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-156938 --alsologtostderr -v=1: (1.219897204s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-156938 -n old-k8s-version-156938
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-156938 -n old-k8s-version-156938: exit status 2 (452.944224ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-156938 -n old-k8s-version-156938
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-156938 -n old-k8s-version-156938: exit status 2 (362.603902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-156938 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-156938 --alsologtostderr -v=1: (1.115899293s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-156938 -n old-k8s-version-156938
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-156938 -n old-k8s-version-156938
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.08s)
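Note: the pause check above can be reproduced outside the Go harness with the same CLI calls taken from this run (profile name old-k8s-version-156938 comes from the log; this is a sketch, not output generated by the test):

	# pause the control plane and kubelet of the profile
	out/minikube-linux-arm64 pause -p old-k8s-version-156938 --alsologtostderr -v=1
	# while paused, these report Paused/Stopped and exit with status 2, which the test treats as "may be ok"
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-156938 -n old-k8s-version-156938
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-156938 -n old-k8s-version-156938
	# unpause and re-check; both status calls should exit 0 again
	out/minikube-linux-arm64 unpause -p old-k8s-version-156938 --alsologtostderr -v=1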

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (77.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-998180 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-998180 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m17.852471127s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (61.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-031797 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 12:33:25.872499  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-031797 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m1.935423998s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-031797 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [feda7aac-c2a3-4f55-97e1-474b08dccf29] Pending
helpers_test.go:352: "busybox" [feda7aac-c2a3-4f55-97e1-474b08dccf29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [feda7aac-c2a3-4f55-97e1-474b08dccf29] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003634989s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-031797 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.42s)
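The readiness wait in this step is driven by the Go helper (helpers_test.go); an approximate manual equivalent using only kubectl against the same context and label is sketched below (assumptions: the 480s timeout mirrors the 8m0s wait above, and the manifest path is the one the test passes):

	# deploy the test pod and wait for it to become Ready
	kubectl --context embed-certs-031797 create -f testdata/busybox.yaml
	kubectl --context embed-certs-031797 wait --for=condition=ready pod --selector=integration-test=busybox --timeout=480s
	# the test then verifies the container's file-descriptor limit
	kubectl --context embed-certs-031797 exec busybox -- /bin/sh -c "ulimit -n"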

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-031797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-031797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.038190179s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-031797 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-031797 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-031797 --alsologtostderr -v=3: (12.138791936s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-998180 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [743f24fc-0932-4fd1-a469-c4be68cbcf90] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [743f24fc-0932-4fd1-a469-c4be68cbcf90] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003995699s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-998180 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-998180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-998180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.336960785s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-998180 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-031797 -n embed-certs-031797
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-031797 -n embed-certs-031797: exit status 7 (106.592675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-031797 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (52.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-031797 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-031797 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (52.068797768s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-031797 -n embed-certs-031797
E0929 12:34:48.950701  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-998180 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-998180 --alsologtostderr -v=3: (11.966562931s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998180 -n no-preload-998180
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998180 -n no-preload-998180: exit status 7 (133.287562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-998180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (54.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-998180 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-998180 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (54.300976041s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998180 -n no-preload-998180
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mg7t6" [0375ea2e-56e6-4d4c-8116-5406dccdc282] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003578429s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mg7t6" [0375ea2e-56e6-4d4c-8116-5406dccdc282] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.035724802s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-031797 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-031797 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-031797 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-031797 --alsologtostderr -v=1: (1.078847901s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-031797 -n embed-certs-031797
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-031797 -n embed-certs-031797: exit status 2 (349.011295ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-031797 -n embed-certs-031797
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-031797 -n embed-certs-031797: exit status 2 (336.798453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-031797 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-031797 -n embed-certs-031797
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-031797 -n embed-certs-031797
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fktf2" [25fc382c-b3ca-46de-b4e5-c0b8b76446f0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006482169s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-226049 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-226049 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m30.546748449s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fktf2" [25fc382c-b3ca-46de-b4e5-c0b8b76446f0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003600361s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-998180 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-998180 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-998180 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-998180 --alsologtostderr -v=1: (1.028527098s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998180 -n no-preload-998180
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998180 -n no-preload-998180: exit status 2 (375.316907ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-998180 -n no-preload-998180
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-998180 -n no-preload-998180: exit status 2 (379.538928ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-998180 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998180 -n no-preload-998180
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-998180 -n no-preload-998180
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-027849 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 12:35:52.433802  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:52.440143  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:52.451478  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:52.472830  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:52.514180  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:52.596425  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:52.757886  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:53.079175  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:53.720511  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:55.002097  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:35:57.563703  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:36:02.685188  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-027849 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (41.694331161s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-027849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-027849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.0625664s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-027849 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-027849 --alsologtostderr -v=3: (1.236307846s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-027849 -n newest-cni-027849
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-027849 -n newest-cni-027849: exit status 7 (70.689788ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-027849 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-027849 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 12:36:12.926963  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-027849 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (14.938004004s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-027849 -n newest-cni-027849
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-027849 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-027849 --alsologtostderr -v=1
E0929 12:36:24.256276  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-027849 -n newest-cni-027849
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-027849 -n newest-cni-027849: exit status 2 (330.037072ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-027849 -n newest-cni-027849
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-027849 -n newest-cni-027849: exit status 2 (310.487111ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-027849 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-027849 -n newest-cni-027849
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-027849 -n newest-cni-027849
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.03s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (83.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0929 12:36:33.408813  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m23.139201688s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-226049 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5ea009b6-4938-4148-bb30-37c5d6e5165e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5ea009b6-4938-4148-bb30-37c5d6e5165e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004991122s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-226049 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-226049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-226049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.305682641s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-226049 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-226049 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-226049 --alsologtostderr -v=3: (12.111547346s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-226049 -n default-k8s-diff-port-226049
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-226049 -n default-k8s-diff-port-226049: exit status 7 (118.903978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-226049 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-226049 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0929 12:37:14.370693  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-226049 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (49.541380877s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-226049 -n default-k8s-diff-port-226049
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6grdp" [ce414d7b-eeec-4f15-9dbe-6aaa4daf83a0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003001156s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-800992 "pgrep -a kubelet"
I0929 12:37:52.923099  294425 config.go:182] Loaded profile config "auto-800992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-800992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mntv9" [c14afcf3-7025-46ea-9019-c8999cf9f4ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mntv9" [c14afcf3-7025-46ea-9019-c8999cf9f4ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.008683942s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6grdp" [ce414d7b-eeec-4f15-9dbe-6aaa4daf83a0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003483591s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-226049 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-226049 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-226049 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-226049 -n default-k8s-diff-port-226049
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-226049 -n default-k8s-diff-port-226049: exit status 2 (491.831596ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-226049 -n default-k8s-diff-port-226049
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-226049 -n default-k8s-diff-port-226049: exit status 2 (402.184404ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-226049 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-226049 -n default-k8s-diff-port-226049
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-226049 -n default-k8s-diff-port-226049
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.39s)
E0929 12:46:35.012312  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/custom-flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:46:38.303195  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/default-k8s-diff-port-226049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:46:55.493665  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/custom-flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:47:06.009545  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/default-k8s-diff-port-226049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:47:17.993479  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:47:36.455018  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/custom-flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:47:53.176278  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:04.445018  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:04.451382  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:04.462803  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:04.484191  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:04.525733  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:04.607205  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:04.768653  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:05.090312  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:05.732351  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:07.013722  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:09.575425  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:14.697208  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:20.878117  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:24.938658  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:25.872448  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:45.421006  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:45.961028  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:48:58.377085  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/custom-flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:26.384173  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:34.134084  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:34.733349  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:34.739825  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:34.751302  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:34.772701  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:34.814132  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:34.895571  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:35.057097  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:35.379251  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:36.020537  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:37.301930  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:39.863791  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:44.986151  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:49:55.227582  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:01.835773  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:15.709056  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:48.306184  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:52.433729  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:50:56.671387  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:01.459684  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:01.466063  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:01.477447  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:01.498924  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:01.540404  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:01.621962  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:01.783635  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:02.105344  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:02.747626  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:04.029077  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:06.591302  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:11.713196  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:14.519543  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/custom-flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:21.954691  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:24.256182  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:28.952028  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:38.303024  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/default-k8s-diff-port-226049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:42.223083  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/custom-flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:51:42.436754  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:15.494687  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:18.593458  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/flannel-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:23.398337  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:52:53.176275  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:53:04.445316  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:53:25.872459  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:53:32.147763  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/enable-default-cni-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:53:45.319997  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/bridge-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:53:45.961224  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-800992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
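
Note: the DNS, Localhost and HairPin checks above all exec into the netcat test deployment created by each group's NetCatPod step. A minimal way to reproduce them by hand, assuming the auto-800992 profile is still running (these are the same commands logged above; only the kubectl context changes per group):

	kubectl --context auto-800992 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The first resolves the in-cluster kubernetes.default service name, the second checks that the pod can reach port 8080 on its own localhost, and the third checks hairpin traffic, i.e. the pod reaching itself back through the netcat service.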

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (84.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0929 12:38:25.872167  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/addons-571100/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m24.122682265s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.12s)
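
Note: every Start test in this section uses the same minikube invocation, varying only the profile name and the CNI selection. A sketch of the shared shape, with <profile> and <cni> as illustrative placeholders:

	out/minikube-linux-arm64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=<cni> --driver=docker --container-runtime=crio

In this run <cni> is a built-in plugin name (kindnet, flannel, bridge), a path to a custom CNI manifest (testdata/kube-flannel.yaml for the custom-flannel group), or is replaced entirely by --enable-default-cni=true for the enable-default-cni group, as the logged commands below show.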

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-vblsr" [bc86add8-a2a7-4e8b-a26d-e4903e4674de] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006788243s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
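
Note: ControllerPod only waits for the CNI's own controller pod (kindnet-vblsr here) to report Running, using the label selector and namespace shown in the log line above. A rough manual equivalent, assuming the kindnet-800992 cluster is still up:

	kubectl --context kindnet-800992 get pods -n kube-system -l app=kindnet

The flannel group later performs the same check with -n kube-flannel -l app=flannel.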

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-800992 "pgrep -a kubelet"
I0929 12:39:40.434972  294425 config.go:182] Loaded profile config "kindnet-800992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-800992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wxwqr" [345d55a0-82d2-41c0-a217-3623e926a673] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wxwqr" [345d55a0-82d2-41c0-a217-3623e926a673] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003957487s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)
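
Note: NetCatPod re-creates the netcat deployment from testdata/netcat-deployment.yaml and waits up to 15m for an app=netcat pod to become Ready; the DNS, Localhost and HairPin checks that follow exec into it. A rough manual equivalent of this step, assuming the same context (kubectl wait is used here only as a stand-in for the test helper's own polling, not what helpers_test.go runs):

	kubectl --context kindnet-800992 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context kindnet-800992 wait --for=condition=Ready pod -l app=netcat --timeout=15m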

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-800992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (61.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0929 12:40:52.433406  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:07.321525  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.426809791s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-800992 "pgrep -a kubelet"
I0929 12:41:14.268953  294425 config.go:182] Loaded profile config "custom-flannel-800992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-800992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tcx9x" [6e047352-90d8-471c-87d6-42471800c20c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tcx9x" [6e047352-90d8-471c-87d6-42471800c20c] Running
E0929 12:41:20.133308  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:24.257687  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/functional-686485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004004939s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-800992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (77.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0929 12:41:48.558338  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/default-k8s-diff-port-226049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:41:58.799728  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/default-k8s-diff-port-226049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:19.281936  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/default-k8s-diff-port-226049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:53.176472  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:53.183136  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:53.194757  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:53.216245  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:53.257615  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:53.339083  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:53.500620  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:53.822682  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:54.465046  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:55.746463  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:42:58.308454  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:43:00.243716  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/default-k8s-diff-port-226049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:43:03.430025  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.480729935s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-800992 "pgrep -a kubelet"
I0929 12:43:04.198358  294425 config.go:182] Loaded profile config "enable-default-cni-800992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-800992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9f48t" [022f279f-6359-4707-b7e9-6fe5eaca2bde] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9f48t" [022f279f-6359-4707-b7e9-6fe5eaca2bde] Running
E0929 12:43:13.671457  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004099905s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-800992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (59.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0929 12:43:45.960820  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:13.664009  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/no-preload-998180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:15.115349  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:22.167867  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/default-k8s-diff-port-226049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:34.134138  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:34.140610  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:34.151971  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:34.173432  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:34.214792  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:34.296726  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:34.458413  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.663054738s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-s8nf5" [870a156d-c3e1-428b-a230-a78c1ee7b735] Running
E0929 12:44:34.780058  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:35.421639  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:36.703088  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:44:39.264543  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004156659s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-800992 "pgrep -a kubelet"
I0929 12:44:41.042340  294425 config.go:182] Loaded profile config "flannel-800992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-800992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b8ntw" [660c11c3-e355-47f7-83b9-d9fc5d680da6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 12:44:44.386505  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-b8ntw" [660c11c3-e355-47f7-83b9-d9fc5d680da6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003192499s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-800992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (46.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0929 12:45:15.109897  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:37.036790  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/auto-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:52.433707  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/old-k8s-version-156938/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:45:56.072053  294425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-292570/.minikube/profiles/kindnet-800992/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-800992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (46.568975815s)
--- PASS: TestNetworkPlugins/group/bridge/Start (46.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-800992 "pgrep -a kubelet"
I0929 12:46:01.211130  294425 config.go:182] Loaded profile config "bridge-800992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-800992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rw9b9" [b8ba7860-691a-4316-a349-ad07275e610e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rw9b9" [b8ba7860-691a-4316-a349-ad07275e610e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004592994s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-800992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-800992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (32/325)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.68s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-274224 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-274224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-274224
--- SKIP: TestDownloadOnlyKic (0.68s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.34s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-571100 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-877866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-877866
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-800992 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-800992" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-800992

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800992"

                                                
                                                
----------------------- debugLogs end: kubenet-800992 [took: 5.090444121s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-800992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-800992
--- SKIP: TestNetworkPlugins/group/kubenet (5.40s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-800992 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-800992" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-800992

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-800992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800992"

                                                
                                                
----------------------- debugLogs end: cilium-800992 [took: 5.26845288s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-800992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-800992
--- SKIP: TestNetworkPlugins/group/cilium (5.47s)

                                                
                                    